
1 Dirichlet Processes in Dialogue Modelling Nigel Crook March 2009

2 Overview: The COMPANIONS project; Dialogue Acts; Document Clustering; Multinomial Distribution; Dirichlet Distribution; Graphical Models; Bayesian Finite Mixture Models; Dirichlet Processes; Chinese Restaurant Process; Inputs and Outputs; Concluding Thoughts. With thanks to Percy Liang and Dan Klein (UC Berkeley): Structured Bayesian Nonparametric Models with Variational Inference, ACL Tutorial, Prague, Czech Republic, 24 June 2007.

3 The COMPANIONS project. COMPANIONS: Intelligent, Persistent, Personalised Multimodal Interfaces to the Internet. One Companion on many platforms. Example exchange: "Your pulse is a bit high, please slow down a bit." / "Okay, but please play some relaxing music then."

4 The COMPANIONS project. Proposed dialogue system architecture: the user's speech signal goes to Speech Recognition (signal to words), then Language Understanding (words to concepts), then the Dialogue Model, where the Dialogue Manager, with its database (DB), maps user intentions (DAs) to system intentions (DAs); the reply flows back through Language Generation and the Speech Synthesizer to the user.

5 Dialogue Acts. A Dialogue Act (DA) is a linguistic abstraction that attempts to capture the intention/purpose of an utterance. DAs are based on the concept of a speech act: when we say something, we do something (Austin, 1962). Examples of DA labels using the DAMSL scheme on the Switchboard corpus:

Example                                       | Dialogue Act
"Me, I'm in the legal department."            | Statement-non-opinion
"Uh-huh."                                     | Acknowledge (Backchannel)
"I think it's great."                         | Statement-opinion
"That's exactly it."                          | Agree/Accept
"So, -"                                       | Abandoned or Turn-Exit
"I can imagine."                              | Appreciation
"Do you have to have any special training?"   | Yes-No-Question

6 Dialogue Act Classification. Research question: can major DA categories be identified automatically through the clustering of utterances? Each utterance can be treated as a bag of (content) words, e.g. "What time is the next train to Oxford?" We can then apply methods from document clustering, as in the sketch below.
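
A minimal sketch of the bag-of-words step in Python; the utterance, tokenisation, and stopword list are illustrative assumptions, not from the slides:

    from collections import Counter

    def bag_of_words(utterance, stopwords=("is", "the", "to")):
        # Lower-case, split on whitespace, strip punctuation, drop function words
        tokens = [w.strip("?.,!").lower() for w in utterance.split()]
        return Counter(w for w in tokens if w and w not in stopwords)

    print(bag_of_words("What time is the next train to Oxford?"))
    # Counter({'what': 1, 'time': 1, 'next': 1, 'train': 1, 'oxford': 1})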

7 Document Clustering Working example: Document clustering

8 Document Clustering. Each document is a bag of (content) words. How many clusters? In parametric methods the number of clusters is specified at the outset. Bayesian nonparametric methods (such as Gaussian Processes and Dirichlet Processes) do not fix the model size in advance; a Dirichlet Process mixture infers how many clusters there are from the data.

9 Multinomial Distribution. A multinomial probability distribution is a distribution over all the possible outcomes of a multinomial experiment, e.g. a fair die or a weighted die. Each draw from a multinomial distribution yields an integer, e.g. 5, 2, 3, 2, 6, …
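
A small sketch of drawing from such distributions with NumPy; the weights for the weighted die are my own illustrative values:

    import numpy as np

    rng = np.random.default_rng(0)
    faces = np.arange(1, 7)
    fair = np.full(6, 1 / 6)                      # a fair die
    weighted = [0.05, 0.05, 0.1, 0.1, 0.2, 0.5]   # a weighted die (illustrative)

    # Each draw yields an integer outcome, e.g. 5, 2, 3, 2, 6, ...
    print(rng.choice(faces, size=5, p=fair))
    print(rng.choice(faces, size=5, p=weighted))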

10 Dirichlet Distribution. Each point on a k-dimensional simplex is a multinomial probability distribution.

11 Dirichlet Distribution. A Dirichlet Distribution is a distribution over multinomial distributions, i.e. over points on the simplex.

12 Dirichlet Distribution. The Dirichlet Distribution is parameterised by a set of concentration constants α = (α_1, …, α_k) and is defined over the k-simplex. A draw from a Dirichlet Distribution is written θ ~ Dirichlet(α_1, …, α_k), where θ = (θ_1, …, θ_k) is a multinomial distribution over k outcomes.

13 Dirichlet Distribution. Example draws from a Dirichlet Distribution over the 3-simplex: Dirichlet(5, 5, 5); Dirichlet(0.2, 5, 0.2); Dirichlet(0.5, 0.5, 0.5).
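
These example draws can be reproduced with NumPy's Dirichlet sampler; the seed is arbitrary, and the comments summarise the qualitative effect of the concentration parameters:

    import numpy as np

    rng = np.random.default_rng(1)
    for alpha in [(5, 5, 5), (0.2, 5, 0.2), (0.5, 0.5, 0.5)]:
        theta = rng.dirichlet(alpha)        # one point on the 3-simplex
        print(alpha, "->", np.round(theta, 3))
    # Dirichlet(5, 5, 5) concentrates draws near the centre of the simplex,
    # Dirichlet(0.2, 5, 0.2) pushes most mass onto the second outcome, and
    # Dirichlet(0.5, 0.5, 0.5) favours the corners (sparse distributions).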

14 Graphical Models. Two nodes A → B encode the factorisation p(A,B) = p(B|A)p(A). Plate notation abbreviates repeated structure: A → B_i inside a plate of size n stands for A → B_1, B_2, …, B_n.

15 Bayesian Finite Mixture Model. π ~ Dirichlet_k(α, …, α). Component parameters θ_z (z ∈ {1, …, k}) are drawn from a base measure G_0: θ_z ~ G_0 (e.g. Dirichlet_v(β, …, β)). For each data point (document) a component z_i is drawn, z_i ~ Multinomial(π), and the data point is drawn from some distribution F(θ): x_i ~ F(θ_{z_i}) (e.g. Multinomial(θ_{z_i})). Parameters: Θ = (π, θ) = (π_1 … π_k, θ_1 … θ_k); hidden variables z = (z_1 … z_n); observed data x = (x_1 … x_n).

16 Bayesian Finite Mixture Model. Document clustering example with k = 2 clusters: π ~ Dirichlet_2(α, α). Choose a source for each data point (document) i ∈ {1, …, n}: z_i ~ Multinomial_2(π), e.g. z_1 = 1, z_2 = 2, z_3 = 2, z_4 = 1, z_5 = 2. Generate the data point (the words in the document) from that source: x_i ~ Multinomial_v(θ_{z_i}), e.g. x_1 = ACAAB, x_2 = ACCBCC, x_3 = CCC, x_4 = CABAAC, x_5 = ACC. Here there are v = 3 word types and θ_z ~ Dirichlet_v(β, β, β).
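
A generative sketch of this finite mixture in Python; the hyperparameter values, seed, and document lengths are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(42)
    k, v, n = 2, 3, 5              # clusters, word types (A, B, C), documents
    alpha, beta = 1.0, 0.5         # illustrative hyperparameters

    pi = rng.dirichlet([alpha] * k)             # mixing weights, pi ~ Dirichlet_k
    theta = rng.dirichlet([beta] * v, size=k)   # one word distribution per cluster

    for i in range(n):
        z = rng.choice(k, p=pi)                         # z_i ~ Multinomial(pi)
        length = rng.integers(3, 7)                     # document length (illustrative)
        words = rng.choice(v, size=length, p=theta[z])  # x_i ~ Multinomial(theta_z)
        print(f"doc {i}: cluster {z}, words {''.join('ABC'[w] for w in words)}")

Documents drawn from the same cluster share a word distribution, which is exactly what clustering later tries to recover.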

17 Data Generation Demo. Sample output (three components, word types coded 0, 1, 2):
Id: 0, component 1, words [1, 2, 1, 2, 1, 1, 1, 1, 1, 1, 1]
Id: 1, component 0, words [0, 1, 2, 2, 0, 0, 1, 0, 1, 0, 0, 0, 2, 1, 0, 2]
Id: 2, component 1, words [1, 1, 2, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1]
Id: 3, component 1, words [1, 1, 1, 1, 1]
Id: 4, component 1, words [1, 1, 1, 1, 1, 1, 1, 1]
Id: 5, component 0, words [0, 2, 0, 0, 0, 2, 2, 0, 0, 1, 0, 2, 0, 2, 1, 2, 0, 0, 2]
Id: 6, component 1, words [1, 1, 1, 1, 1, 1]
Id: 7, component 0, words [0, 2, 2, 0, 0, 2, 2, 0, 2, 0]
Id: 8, component 0, words [0, 0, 2, 1, 2, 2]
Id: 9, component 0, words [2, 0, 1, 0, 2, 0, 2, 1, 0, 2, 2, 1, 1, 2, 0]
Id: 10, component 1, words [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2]
Id: 11, component 2, words [0, 0, 2, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0]
Id: 12, component 0, words [1, 0, 1, 0, 0, 0, 2, 2, 0, 0, 2, 0, 2, 1, 0, 0]
Id: 13, component 1, words [1, 1, 1, 2, 1, 1, 1]
Id: 14, component 0, words [0, 2, 2, 0, 2, 0, 2, 0, 0, 0, 2, 1, 2]
Id: 15, component 0, words [2, 0, 0, 0, 1, 2, 0, 2, 0, 2, 0, 2, 0]
Id: 16, component 1, words [1, 1, 1, 1, 1]
Id: 17, component 0, words [1, 1, 0, 0, 2, 1, 2, 0, 0, 0, 1, 2, 1]
Id: 18, component 1, words [1, 1, 1, 1, 1, 1, 0, 2, 1]
Id: 19, component 1, words [1, 1, 0, 2, 1, 1, 1, 1, 0]
Id: 20, component 2, words [0, 1, 0, 2, 0, 1, 0, 2, 0, 0, 0, 0, 0, 0, 2]

18 Dirichlet Processes. Dirichlet Processes can be thought of as infinite-dimensional generalisations of the Dirichlet distribution … but not quite! Consider what happens as the dimension k of a Dirichlet distribution increases: k = 2, 4, 6, 8, 10, 12, 18, … The Dirichlet distribution is symmetric, so no component is favoured; for a Dirichlet Process we need the larger components to appear near the beginning of the distribution on average.

19 Dirichlet Processes. Stick-breaking construction (GEM): start with a stick of length 1 and repeatedly break off a random fraction of what remains. With β_k ~ Beta(1, α), the k-th weight is π_k = β_k ∏_{j<k} (1 − β_j), giving an infinite sequence π ~ GEM(α) whose larger weights tend to come first. A sketch follows.
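
A sketch of the stick-breaking construction under those definitions, truncated to a finite number of pieces for display; alpha and the seed are illustrative:

    import numpy as np

    def stick_breaking(alpha, rng, n_pieces=10):
        # Draw the first n_pieces weights of pi ~ GEM(alpha)
        remaining, weights = 1.0, []
        for _ in range(n_pieces):
            beta = rng.beta(1.0, alpha)        # break off a Beta(1, alpha) fraction
            weights.append(remaining * beta)   # pi_k = beta_k * prod_{j<k} (1 - beta_j)
            remaining *= 1.0 - beta            # length of stick still unbroken
        return weights

    rng = np.random.default_rng(0)
    print(np.round(stick_breaking(alpha=1.0, rng=rng), 3))
    # Small alpha: the first few pieces take most of the stick (few large clusters);
    # large alpha: many small pieces (many clusters).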

20 Dirichlet Processes Mixture Model definition. As for the finite mixture, but the weights come from the stick-breaking process: π ~ GEM(α), with π = (π_1, π_2, …), replacing π ~ Dirichlet_k(α, …, α). Component parameters are drawn from the base measure, θ_z ~ G_0. For each data point (document) a component is drawn, z_i ~ Multinomial(π), and the data point is drawn from some distribution F(θ): x_i ~ F(θ_{z_i}) (e.g. Multinomial(θ_{z_i})).
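
A sketch of generating data from this DP mixture using a truncated stick; the truncation level T, hyperparameters, and seed are my own choices for illustration:

    import numpy as np

    rng = np.random.default_rng(7)
    alpha, beta, v, n, T = 1.0, 0.5, 3, 10, 50   # T = truncation level

    # pi ~ GEM(alpha), truncated at T components (vectorised stick-breaking)
    b = rng.beta(1.0, alpha, size=T)
    pi = b * np.concatenate(([1.0], np.cumprod(1.0 - b)[:-1]))
    pi /= pi.sum()                               # renormalise the truncated stick

    theta = rng.dirichlet([beta] * v, size=T)    # theta_z ~ G_0 for every component

    for i in range(n):
        z = rng.choice(T, p=pi)                  # z_i ~ Multinomial(pi)
        x = rng.choice(v, size=rng.integers(3, 8), p=theta[z])
        print(f"doc {i}: component {z}, words {x.tolist()}")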

21 Chinese Restaurant Process. The Chinese Restaurant Process (CRP) is one view of DPs: customers x_1, x_2, …, x_7 arrive one by one and sit at tables. Tables = clusters; customers = data points (documents); dishes = component parameters.

22 Shut your eyes if you don't want to see any more maths … The CRP seating rule: θ_i | θ_1, …, θ_{i−1} ~ (1/(i−1+α)) Σ_{j<i} δ(θ_j) + (α/(i−1+α)) G_0. Equivalently, customer i joins an existing table with probability proportional to the number of customers already seated there, and opens a new table with probability proportional to α. The rich-get-richer principle: tables with more customers get more customers on average.
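
A direct simulation of this seating rule; alpha, the seed, and the number of customers are illustrative:

    import numpy as np

    def crp(n_customers, alpha, rng):
        # Seat customers one at a time; return each customer's table and table sizes
        tables, seating = [], []
        for i in range(n_customers):
            # Existing table t is chosen with probability n_t / (i + alpha);
            # a new table with probability alpha / (i + alpha): the rich get richer.
            probs = np.array(tables + [alpha]) / (i + alpha)
            t = rng.choice(len(tables) + 1, p=probs)
            if t == len(tables):
                tables.append(1)     # open a new table
            else:
                tables[t] += 1
            seating.append(t)
        return seating, tables

    rng = np.random.default_rng(0)
    seating, tables = crp(20, alpha=1.0, rng=rng)
    print("seating:", seating)
    print("table sizes:", tables)

With alpha = 1 a handful of large tables typically emerge, illustrating the rich-get-richer effect.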

23 CRP Initial Clustering Demo. Initial table allocations: 100 documents, 3 sources, 5 to 20 words per document.

24 CRP Table parameters. Each cluster (table) is given a parameter (dish) θ_t which all the data points (customers) at that table share. These parameters are drawn from the base measure G_0 (a Dirichlet distribution in this case).

25 CRP Inference. The goal of Bayesian inference is to calculate the posterior p(π, θ, z | x). The posterior cannot usually be sampled directly, so we can use Gibbs sampling …

26 CRP Inference: re-clustering. One customer at a time (e.g. x_2 or x_4) is removed from its table and re-seated according to the CRP probabilities, conditioned on the current seating of all the other customers.

27 CRP Inference: table updates. When customers move, each table's parameter (dish) is updated from the word counts of the documents now seated at it, combining the base measure with those counts. A Gibbs-sweep sketch follows.
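
A sketch of one collapsed Gibbs sweep over documents, in the spirit of slides 26-27. This is my own simplified illustration, not the author's code: it scores existing tables with the posterior-mean word distribution rather than the exact Dirichlet-multinomial predictive, recomputes counts from scratch, and works with raw probabilities (a real sampler would use log space and incremental count updates):

    import numpy as np

    def gibbs_sweep(docs, z, alpha, beta, rng):
        # One sweep: re-seat each document given the current seating of all others.
        # docs is an (n, v) matrix of word counts; z holds each document's table.
        n, v = docs.shape
        for i in range(n):
            mask = np.arange(n) != i            # remove document i from its table
            others = z[mask]
            labels, sizes = np.unique(others, return_counts=True)
            probs = []
            for t, n_t in zip(labels, sizes):
                # Posterior-mean word distribution of table t given its documents
                c = docs[mask][others == t].sum(axis=0) + beta
                probs.append(n_t * np.prod((c / c.sum()) ** docs[i]))
            # New table: score under the base measure's mean (uniform over types)
            probs.append(alpha * np.prod(np.full(v, 1.0 / v) ** docs[i]))
            probs = np.array(probs) / np.sum(probs)
            choice = rng.choice(len(probs), p=probs)
            z[i] = labels[choice] if choice < len(labels) else z.max() + 1
        return z

    rng = np.random.default_rng(0)
    docs = rng.integers(0, 4, size=(10, 3))   # 10 toy documents, counts of 3 word types
    z = np.zeros(10, dtype=int)               # start with every document at one table
    for _ in range(20):
        z = gibbs_sweep(docs, z, alpha=1.0, beta=0.5, rng=rng)
    print("final clustering:", z)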

28 CRP Inference Demo

29 Concluding Thoughts. The CRP works well on the toy document clustering example: document size 100+ words, up to 6 word types, 100-500 documents. Will it work when clustering utterances? Utterance size 1-20 words, up to 6 word types, 100-500 documents. This is a much harder classification problem.

