
1 Sequence Clustering and Labeling for Unsupervised Query Intent Discovery. Speaker: Po-Hsien Shih. Advisor: Jia-Ling Koh. Source: WSDM’12. Date: 1 November 2012

2 Outline: Introduction, Feature Representation, Clustering, Intent Summarization, Instance Annotation, Experiment, Conclusion

3 Introduction In the past, a search engine returned just one standard list of ranked results for any query.

4 Introduction Modern search engines not only improve ranking results but also allow domain-dependent types of semantically enriched search results, for example for queries such as "[city] weather" and "[movie] showtimes [location]".

5 Introduction Key issue: connecting the surface-level query to a structured data source according to the intent of the user. An intent understanding system needs domain classification, intent detection, and slot filling.

6 Introduction Goal: discover popular user intents and their associated slots, and annotate query instances accordingly, in a completely unsupervised manner. An intent is represented by a pattern consisting of a sequence of semantic concepts and lexical items. The paper presents a sequence clustering method for finding clusters of queries with similar intent, and an intent summarization method for labeling each cluster with an intent pattern that describes its queries. The method discovers the patterns and slots in a domain, with the patterns human-readable enough to inform the development of structured search and to connect the intents to a structured database, and it allows new queries not used in clustering to be classified into the discovered patterns.

7 Introduction Q = "Harry Potter showtimes Boston". Domain: Movie. Intent: FindShowtimes. Slots: Title = "Harry Potter", Location = "Boston". Q maps to the pattern [Movie.title] showtimes [Location], which also matches "Madagascar 2 showtimes London" and "Batman showtimes Sydney".

8 Feature Representation Each query q = (w_1, w_2, ..., w_M) has M word tokens, and each token is represented by a feature vector x_i = (x_{i,1}, x_{i,2}, ..., x_{i,N}), i = 1, 2, ..., M. E.g., q = (2010, buick, regal, review), M = 4. Knowledge base: Freebase is a large knowledge base with structured data from many sources. We created a list of concepts for each surface form by merging the concepts of all entities for that surface form.

9 Feature Representation For q = (2010, buick, regal, review), a token vector looks like x_1 = [0, 1, 0, 1, 1, ..., 0, 0]^T, and the whole query is the N×M matrix X_q = (x_1, x_2, ..., x_M). Here N = N_S + N_L: the N_S slot features come from the knowledge base (Freebase), e.g., [car], [episode], and the N_L lexical features are words extracted from the Wall Street Journal corpus, excluding proper nouns and cardinal numbers, e.g., review, test. Why assign the value 1? Binary features produce more useful intent summarizations. A sketch of this construction follows.
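A minimal Python sketch of this representation; the vocabularies, the CONCEPTS_OF lookup (standing in for Freebase), and all values are toy assumptions, not the paper's actual data:

```python
import numpy as np

# Hypothetical vocabularies: N_S concept features + N_L lexical features.
SLOT_FEATURES = ["[car]", "[year]", "[episode]"]
LEX_FEATURES = ["review", "test", "showtimes"]
FEATURES = SLOT_FEATURES + LEX_FEATURES          # N = N_S + N_L

CONCEPTS_OF = {                                  # toy surface-form -> concepts
    "buick": ["[car]"],
    "regal": ["[car]", "[episode]"],
    "2010": ["[year]"],
}

def token_vector(word):
    """Binary feature vector x_i for one word token."""
    x = np.zeros(len(FEATURES))
    for concept in CONCEPTS_OF.get(word, []):    # concept (slot) features
        x[FEATURES.index(concept)] = 1.0
    if word in LEX_FEATURES:                     # lexical feature
        x[FEATURES.index(word)] = 1.0
    return x

def query_matrix(query):
    """X_q: an N x M matrix with one column per word token."""
    return np.column_stack([token_vector(w) for w in query.split()])

print(query_matrix("2010 buick regal review").shape)  # (6, 4)
```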

10 Feature Representation REF(y) = 1/size(y), similar to IDF, where y is a Freebase concept or lexical item; size(y) is 1 if y is a lexical item, and otherwise the number of surface forms that concept y contains.
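A sketch of the REF weighting under the same assumptions; CONCEPT_SIZE is a hypothetical map from a concept to its number of Freebase surface forms, and lexical items have size 1 by definition:

```python
# REF(y) = 1 / size(y), analogous to IDF.
CONCEPT_SIZE = {"[car]": 12000, "[year]": 300, "[episode]": 90000}

def ref(y):
    """Rarity weight of a feature y (Freebase concept or lexical item)."""
    return 1.0 / CONCEPT_SIZE.get(y, 1)  # lexical items: size(y) = 1

print(ref("[year]"))   # 1/300: a broad concept gets a small weight
print(ref("review"))   # 1.0: a lexical item is maximally specific
```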

11 Clustering Goal: discover groups of queries with similar intents. A bottom-up approach is adopted: agglomerative clustering with a distance metric based on dynamic time warping (DTW) between pairs of query sequences.

12 Distance Metric r_i = (REF(1)·x_{i,1}, ..., REF(N)·x_{i,N})^T is the REF-weighted vector for x_i; the static distance between a pair of token feature vectors x_i and x_j is computed on their REF-weighted versions r_i and r_j.
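A sketch of the static token distance; the slide does not show the exact formula, so a cosine-style distance on the REF-weighted vectors is assumed here:

```python
import numpy as np

def ref_weighted(x, ref_weights):
    """r_i: elementwise REF-weighted version of feature vector x_i."""
    return x * ref_weights

def token_distance(x_i, x_j, ref_weights):
    """Assumed static distance: 1 - cosine similarity of r_i and r_j."""
    r_i, r_j = ref_weighted(x_i, ref_weights), ref_weighted(x_j, ref_weights)
    denom = np.linalg.norm(r_i) * np.linalg.norm(r_j)
    if denom == 0.0:
        return 1.0  # no shared evidence: maximal distance
    return 1.0 - float(np.dot(r_i, r_j)) / denom
```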

13 Distance Metric An alignment is a set of edges E connecting the two sequences of nodes, such that every node has at least one edge and there are no crossing edges. The optimal alignment is the one that produces the smallest total distance, computed by summing over all edges.

14 Distance Metric The distance between two query sequences is the DTW distance: the total cost of the optimal alignment.
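A minimal DTW sketch over two queries given as lists of token vectors; dist is any two-argument token distance (e.g., the REF-weighted one above with the weights bound). This is the textbook recurrence, not necessarily the paper's exact variant:

```python
import numpy as np

def dtw(seq_a, seq_b, dist):
    """Cost of the optimal monotonic, non-crossing alignment."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(seq_a[i - 1], seq_b[j - 1])
            # match both nodes, or let one node take an extra edge
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return float(D[n, m])
```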

15 Agglomerative Clustering Two linkage variants: single-link clustering and complete-link clustering. Stopping criterion: either set a threshold on the minimum inter-cluster distance of the remaining clusters, or set a fixed number of merging iterations. A sketch follows.
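A sketch of this step using SciPy's hierarchical clustering; the paper's own implementation details are not on the slide, so this only illustrates the single-/complete-link choice and the distance-threshold stop:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_queries(queries, query_dist, threshold, method="single"):
    """method is 'single' or 'complete'; stop by inter-cluster distance."""
    n = len(queries)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = query_dist(queries[i], queries[j])  # DTW
    Z = linkage(squareform(D), method=method)
    return fcluster(Z, t=threshold, criterion="distance")  # cluster ids
```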

16 Intent Summarization Goal: produce a pattern that describes the intent of a cluster. A pattern is a label sequence of slots and lexical items, p = (y_1, y_2, ..., y_L), where L is the length of the pattern and L ≤ the length of the queries being summarized. Advantages: the discovered intents are human-readable, which informs the development of structured search and makes it useful to connect the intents to a structured database; and the patterns produced by intent summarization allow generalization of the intents to new query instances.

17 Intent Summarization Step I: the queries are segmented and aligned. Words that belong to the same entity in Freebase have very similar feature vector representations (e.g., "San Francisco"), so adjacent words are merged into one segment when the cosine distance between their vectors is below a low threshold; a sketch follows.
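A sketch of Step I's word merging; the threshold value is an assumption, and cosine distance (not similarity) is used so that near-identical vectors merge:

```python
import numpy as np

def cosine_distance(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return 1.0 if denom == 0.0 else 1.0 - float(np.dot(a, b)) / denom

def segment(words, vectors, threshold=0.1):
    """Merge adjacent words whose feature vectors nearly coincide."""
    segments = [[words[0]]]
    for w, prev_v, v in zip(words[1:], vectors, vectors[1:]):
        if cosine_distance(prev_v, v) < threshold:  # likely the same entity
            segments[-1].append(w)
        else:
            segments.append([w])
    return [" ".join(s) for s in segments]
```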

18 Intent Summarization Step II: for each position in the pattern, select a Freebase concept or lexical item to represent the segments at that position. Given a cluster of K queries, y_i = generalize(S_i = s_i^1, ..., s_i^K) yields the Freebase concept or lexical item that summarizes the input segments. One natural way to do generalization is to pick the concept that contains the most input segments according to Freebase (see the vote sketch below), but this turns out not to be robust.
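A sketch of that naive majority-vote generalization; CONCEPTS_OF is a hypothetical surface-form-to-concepts lookup, and the example anticipates the failure case on the next slide:

```python
from collections import Counter

CONCEPTS_OF = {"2004": ["[year]"], "2010": ["[year]"]}  # "09" missing: noise

def generalize_by_vote(segments):
    """Pick the concept (or lexical item) covering the most segments."""
    votes = Counter()
    for s in segments:
        for label in CONCEPTS_OF.get(s, [s]):  # fall back to the lexical item
            votes[label] += 1
    return votes.most_common(1)[0][0]

print(generalize_by_vote(["2004", "2010", "09"]))  # "[year]", despite "09"
```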

19 Intent Summarization Omissions and ambiguity in the Freebase concepts of the segments can lead to incorrect generalizations. E.g., for "2004 audi s4 review", "2010 buick regal review", and "09 acura rl review": the Freebase concept [year] contains "2004" and "2010" but not "09", which acts as noise.

20 Intent Summarization The following model gives the joint probability of the segments and the concept, with each segment in the cluster modeled independently of the others: P(y, s^1, ..., s^K) = P(y) · ∏_{k=1..K} P(s^k | y). To address the robustness problem above, the REF score is used to initialize the parameters; F in the initialization formula denotes the total number of surface forms in Freebase.
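A sketch of evaluating that joint model in log space; p_y and p_s_given_y are hypothetical parameter tables, and the smoothing constant is an assumption:

```python
import math

def log_joint(y, segments, p_y, p_s_given_y, eps=1e-12):
    """log P(y) + sum_k log P(s^k | y) for one candidate label y."""
    score = math.log(p_y.get(y, eps))
    for s in segments:
        score += math.log(p_s_given_y.get((s, y), eps))
    return score
```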

21 Intent Summarization This initialization prefers concepts with higher REF scores and lexical items, but it is not ideal because it is susceptible to overly-specific concepts, e.g., choosing [breed origin] or [olympic participating country] over the more general [country] concept. The EM algorithm resolves this.

22 Intent Summarization E-step: the maximized expression is simple to calculate for each concept and lexical item, so each segment set is classified to its best label. M-step: the model parameters are re-estimated as P(y) = count(y) / (total number of classifications made), where count(y) is the number of times y appeared in the E-step's classification.
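A sketch of one EM iteration built on the log_joint function above; hard (classification-style) assignments are assumed, matching the slide's wording:

```python
from collections import Counter

def em_step(segment_sets, labels, p_y, p_s_given_y, log_joint):
    # E-step: classify each position's segment set to its best label.
    assignments = [
        max(labels, key=lambda y: log_joint(y, segs, p_y, p_s_given_y))
        for segs in segment_sets
    ]
    # M-step: P(y) = count(y) / total number of classifications made.
    counts = Counter(assignments)
    total = len(assignments)
    new_p_y = {y: counts.get(y, 0) / total for y in labels}
    return assignments, new_p_y
```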

23 Intent Summarization Any clusters that have the same pattern after intent summarization are merged. (The slide shows the output of intent summarization on the example query cluster.) Clusters could be merged further based on semantic intent, since an intent may be expressed by several different but related patterns, e.g., "[city] map" and "map of [city]".

24 Instance Annotation For each pattern p = (y_1, y_2, ..., y_L), z_i is defined as a feature vector, in the same space as the word tokens, that represents the i-th element of the pattern. Given this representation, a new instance is classified into the closest existing pattern z*. If the minimum distance is above a threshold τ, the query is judged not similar enough to any of the existing patterns and is left unclassified (sketch below).
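A sketch of the annotation rule, reusing the DTW sketch from earlier; dist is a two-argument token distance (e.g., the REF-weighted one with the weights bound), and the threshold τ is as described on the slide:

```python
def annotate(query_vectors, patterns, dtw, dist, tau):
    """patterns: list of (pattern, [z_1, ..., z_L]) pairs in token-vector space.
    Returns the closest pattern, or None if nothing is within tau."""
    best_pattern, best_dist = None, float("inf")
    for pattern, z_vectors in patterns:
        d = dtw(query_vectors, z_vectors, dist)
        if d < best_dist:
            best_pattern, best_dist = pattern, d
    return best_pattern if best_dist <= tau else None  # None: unclassified
```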

25 Experiment Three evaluations: cluster pattern precision, query labeling precision, and domain coverage.

26 Evaluation 1: Cluster Pattern Precision The single-link clusters were more precise, while the complete-link algorithm discovered more clusters.

27 Evaluation 2: Query Labeling Precision

28 Evaluation 3: Domain Coverage

29 Conclusion We have presented an unsupervised method to find clusters of queries with similar intent, induce patterns that describe these clusters, and classify new queries into the discovered clusters, all using only domain-independent resources. Our work can facilitate structured search development by reducing the amount of manual labour needed to extend structured search into a new domain.

