
1 Hierarchical Dirichlet Process and Infinite Hidden Markov Model
Duke University Machine Learning Group
Presented by Kai Ni, February 17, 2006
Paper by Y. W. Teh, M. I. Jordan, M. J. Beal & D. M. Blei, NIPS 2004

2 Outline
–Motivation
–Dirichlet Processes (DP)
–Hierarchical Dirichlet Processes (HDP)
–Infinite Hidden Markov Model (iHMM)
–Results & Conclusions

3 Motivation
Problem – "multi-task learning" in which the "tasks" are clustering problems.
Goal – Share clusters among multiple, related clustering problems. The number of clusters is open-ended and inferred automatically by the model.
Applications
–Genome pattern analysis
–Information retrieval over a corpus

4 Hierarchical Model
A single clustering problem can be analyzed with a Dirichlet process (DP): G ~ DP(α, G_0).
–Draws φ_i from G are discrete and hence generally not all distinct.
For J groups, let G_j, j = 1, …, J, be a group-specific DP: G_j ~ DP(α, G_0).
To share information, we must link the group-specific DPs.
–If G_0 is continuous, the draws G_j have no atoms in common with probability one.
–HDP solution: G_0 is itself a draw from a DP: G_0 ~ DP(γ, H).

5 Dirichlet Process & Hierarchical Dirichlet Process
Three different perspectives
–Stick-breaking
–Chinese restaurant
–Infinite mixture models
Setup: G ~ DP(α, G_0), with concentration parameter α and base measure G_0.
Properties of DP
–E[G(A)] = G_0(A) and Var[G(A)] = G_0(A)(1 − G_0(A)) / (α + 1) for any measurable set A.

6 Stick-breaking View
An explicit mathematical construction of the DP. Draws from a DP are discrete with probability one.
In the DP: β'_k ~ Beta(1, α), β_k = β'_k ∏_{l<k} (1 − β'_l), θ_k ~ G_0, and G = Σ_k β_k δ_{θ_k}.
In the HDP: the global weights β = (β_k) ~ GEM(γ) define G_0 = Σ_k β_k δ_{θ_k}; each group then draws its own weights π_j ~ DP(α, β), so G_j = Σ_k π_jk δ_{θ_k} reuses the same atoms θ_k.
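
The stick-breaking weights β_k above are easy to simulate. This is a minimal sketch (function name and the truncation to K sticks are my own, not from the slides): draw β'_k ~ Beta(1, α) and multiply by the length of stick remaining.

```python
import numpy as np

def stick_breaking(alpha, K, rng):
    """Truncated stick-breaking: beta_k = beta'_k * prod_{l<k} (1 - beta'_l),
    with beta'_k ~ Beta(1, alpha). The K weights sum to less than 1; the
    remainder belongs to the infinitely many untruncated atoms."""
    bp = rng.beta(1.0, alpha, size=K)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - bp)[:-1]))
    return bp * remaining

rng = np.random.default_rng(0)
w = stick_breaking(alpha=1.0, K=10, rng=rng)
```

Note that the weights decay (stochastically) geometrically in k, which is why a modest truncation level K already captures nearly all of the mass.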

7 DP – Chinese Restaurant Process
Exhibits the clustering property of the DP.
Let φ_1, …, φ_{i−1} be i.i.d. draws from G; let θ_1, …, θ_K be the distinct values taken on by φ_1, …, φ_{i−1}, and let n_k be the number of φ_{i'} equal to θ_k, 0 < i' < i. Then
–φ_i = θ_k with probability n_k / (i − 1 + α) (customer i joins table k),
–φ_i is a new draw from G_0 with probability α / (i − 1 + α) (customer i opens a new table).
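
The seating rule above can be simulated directly. A small sketch (the function name and parameter choices are illustrative): customer i joins an existing table with probability proportional to its occupancy n_k, or opens a new table with probability proportional to α.

```python
import numpy as np

def crp_assignments(n, alpha, rng):
    """Simulate table assignments for n customers in a Chinese restaurant
    process with concentration alpha."""
    counts = []                       # customers currently at each table
    labels = []                       # table index chosen by each customer
    for i in range(n):
        # probs proportional to (n_1, ..., n_K, alpha); last entry = new table
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = int(rng.choice(len(probs), p=probs))
        if k == len(counts):          # customer opens a new table
            counts.append(0)
        counts[k] += 1
        labels.append(k)
    return labels, counts

rng = np.random.default_rng(1)
labels, counts = crp_assignments(100, alpha=2.0, rng=rng)
```

The rich-get-richer dynamics are visible in `counts`: a few tables accumulate most customers, and the number of occupied tables grows only logarithmically in n.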

8 HDP – Chinese Restaurant Franchise
First level: within each group, a DP mixture.
–φ_{j1}, …, φ_{j(i−1)} are i.i.d. draws from G_j; ψ_{j1}, …, ψ_{jT_j} are the values taken on by φ_{j1}, …, φ_{j(i−1)} (the tables), and n_jt is the number of φ_{ji'} equal to ψ_{jt}, 0 < i' < i.
Second level: across groups, clusters are shared.
–The base measure of each group is a draw from a DP: G_0 ~ DP(γ, H).
–θ_1, …, θ_K are the values taken on by ψ_{j1}, …, ψ_{jT_j} (the dishes), and m_k is the number of ψ_{jt} equal to θ_k over all j, t.
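
The two levels can be simulated together: within restaurant j, a customer picks a table with probability proportional to n_jt (or a new table with probability proportional to α); each new table orders a dish with probability proportional to m_k, the number of tables serving it franchise-wide (or a new dish with probability proportional to γ). A sketch, with illustrative names and parameters:

```python
import numpy as np

def crf_sample(group_sizes, alpha, gamma, rng):
    """Chinese restaurant franchise: returns, per group, each customer's
    dish label, plus m (tables per dish across all restaurants)."""
    m = []                                 # m_k: tables serving dish k
    assignments = []
    for n_j in group_sizes:
        tables, dish_of_table, labels = [], [], []
        for _ in range(n_j):
            # pick an existing table (prop. n_jt) or a new one (prop. alpha)
            probs = np.array(tables + [alpha], dtype=float)
            t = int(rng.choice(len(probs), p=probs / probs.sum()))
            if t == len(tables):           # new table: order a dish
                dprobs = np.array(m + [gamma], dtype=float)
                k = int(rng.choice(len(dprobs), p=dprobs / dprobs.sum()))
                if k == len(m):            # brand-new dish for the franchise
                    m.append(0)
                m[k] += 1
                tables.append(0)
                dish_of_table.append(k)
            tables[t] += 1
            labels.append(dish_of_table[t])
        assignments.append(labels)
    return assignments, m

rng = np.random.default_rng(2)
assignments, m = crf_sample([30, 30, 30], alpha=1.0, gamma=1.0, rng=rng)
```

Because the dish menu `m` is shared across restaurants, the same dish (mixture component) can be re-used by several groups, which is exactly the sharing that motivates the HDP.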

9 HDP – CRF graph
The values of θ are shared between groups as well as within groups. This is a key property of the HDP.
Integrating out G_0, each table's dish is drawn as: ψ_{jt} = θ_k with probability m_k / (Σ_k m_k + γ), and ψ_{jt} is a new draw from H with probability γ / (Σ_k m_k + γ).

10 DP Mixture Model
One of the most important applications of the DP: a nonparametric prior distribution on the components of a mixture model, i.e. θ_i ~ G and x_i ~ F(θ_i). G can be viewed as defining an infinite mixture model.
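
As a concrete instance of θ_i ~ G, x_i ~ F(θ_i), here is a minimal generative sketch of a (truncated) DP mixture of 1-D Gaussians. The base measure H = N(0, 3²), unit observation noise, and the truncation level are illustrative choices, not from the slides:

```python
import numpy as np

def sample_dp_mixture(n, alpha, rng, K=100):
    """Draw n observations from a truncated DP mixture of 1-D Gaussians:
    weights from stick-breaking, atoms theta_k ~ N(0, 3^2) from the base
    measure, observations x_i ~ N(theta_{z_i}, 1)."""
    bp = rng.beta(1.0, alpha, size=K)
    w = bp * np.concatenate(([1.0], np.cumprod(1.0 - bp)[:-1]))
    w /= w.sum()                           # renormalise the truncation
    theta = rng.normal(0.0, 3.0, size=K)   # atoms drawn from H
    z = rng.choice(K, size=n, p=w)         # component indicator per point
    x = rng.normal(theta[z], 1.0)
    return x, z

rng = np.random.default_rng(3)
x, z = sample_dp_mixture(200, alpha=2.0, rng=rng)
```

Although K = 100 components are available, only a handful receive data: the expected number of occupied components grows like α log n, which is what "the number of clusters is open-ended and inferred automatically" refers to.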

11 HDP Mixture Model
HDP can be used as the prior distribution over the factors for nested group data. We consider two-level DPs: G_0 ~ DP(γ, H) and G_j ~ DP(α, G_0). G_0 links the child DPs G_j and forces them to share components; the G_j are conditionally independent given G_0.
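
The two-level construction can be sketched via hierarchical stick-breaking: global weights β ~ GEM(γ) stand in for G_0, and each group's weights π_j ~ DP(α_0, β) are drawn (after truncation) using the gamma representation of the Dirichlet distribution. Function name, truncation, and parameter values are illustrative:

```python
import numpy as np

def hdp_group_weights(J, alpha0, gamma, rng, K=20):
    """Global weights beta ~ GEM(gamma) (truncated to K), then per-group
    weights pi_j ~ DP(alpha0, beta), sampled as normalised Gamma variates
    g_jk ~ Gamma(alpha0 * beta_k)."""
    bp = rng.beta(1.0, gamma, size=K)
    beta = bp * np.concatenate(([1.0], np.cumprod(1.0 - bp)[:-1]))
    beta /= beta.sum()                          # renormalise the truncation
    g = rng.gamma(alpha0 * beta, size=(J, K))   # one Gamma draw per (j, k)
    pi = g / g.sum(axis=1, keepdims=True)       # rows ~ Dirichlet(alpha0*beta)
    return beta, pi

rng = np.random.default_rng(4)
beta, pi = hdp_group_weights(J=3, alpha0=5.0, gamma=1.0, rng=rng)
```

Every row of `pi` reweights the SAME K shared atoms, which is precisely how G_0 forces the child DPs to share components while leaving them conditionally independent.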

12 Infinite Hidden Markov Model
The number of hidden states is allowed to be countably infinite. The transition probabilities in the i-th row of the transition matrix A can be interpreted as mixing proportions π_i = (a_i1, a_i2, …, a_ik, …).
Thus each row of A in the HMM is a DP. These DPs must also be linked, because they should share the same set of "next states". HDP provides the natural framework for the infinite HMM.

13 iHMM via HDP
Assign observations to groups, where the groups are indexed by the value of the previous state variable in the sequence. The current state and emission distribution then define a group-specific mixture model.
Multiple iHMMs can be linked by adding an additional level of Bayesian hierarchy: a master DP couples the iHMMs, each of which is itself a set of DPs.
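
The construction above can be sketched in two steps (truncated to K states; function names and parameters are illustrative): first build a transition matrix whose rows are independent DP draws around shared global weights β, then simulate a path in which the "group" at step t is the previous state s_{t−1}.

```python
import numpy as np

def ihmm_transitions(K, alpha0, gamma, rng):
    """Shared 'next state' weights beta ~ GEM(gamma), then each row
    A[i, :] ~ DP(alpha0, beta) via normalised Gamma variates. All rows
    reweight the same set of states, as required by the iHMM."""
    bp = rng.beta(1.0, gamma, size=K)
    beta = bp * np.concatenate(([1.0], np.cumprod(1.0 - bp)[:-1]))
    beta /= beta.sum()
    g = rng.gamma(alpha0 * beta, size=(K, K))
    return beta, g / g.sum(axis=1, keepdims=True)

def ihmm_path(T, A, rng, s0=0):
    """Simulate a state path: the group at step t is s_{t-1}, whose row
    A[s_{t-1}, :] is the group-specific mixture over next states."""
    states = [s0]
    for _ in range(T - 1):
        states.append(int(rng.choice(A.shape[1], p=A[states[-1]])))
    return states

rng = np.random.default_rng(5)
beta, A = ihmm_transitions(K=15, alpha0=4.0, gamma=1.0, rng=rng)
path = ihmm_path(100, A, rng)
```

Because β concentrates its mass on a few states, the sampled path revisits a small, self-consistent set of states even though K of them are nominally available.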

14 HDP & iHMM
                HDP (CRF aspect)                   iHMM
Group           Restaurant j (fixed)               Indexed by s_{i−1} (random)
Data            Customer x_ji                      Observation y_i
Hidden factor   Table ψ_jt = θ_k; dish θ_k ~ H,    State s_i = k, k = 1…∞;
                k = 1…∞                            emission row B(s_i, :)
DP weights      Popularity π_jk, k = 1…∞           Transition row A(s_{i−1}, :)
Likelihood      F(x_ji | φ_ji)                     B(s_i, y_i)

15 Non-trivialities in iHMM
HDP assumes a fixed partition of the data into groups, while the HMM handles time-series data in which the definition of the groups is itself random.
In the CRF view of the HDP, the number of restaurants is infinite. Moreover, in the sampling scheme, changing s_t may affect the group assignment of all subsequent data.
The CRF is natural for describing the iHMM but awkward for sampling; sampling algorithms derived from the other representations are needed for the iHMM.

16 HDP Results

17 iHMM Results

18 Conclusion
HDP is a hierarchical, nonparametric model for clustering problems involving multiple groups of data. The mixture components are shared across groups, and the appropriate number of components is determined by the HDP automatically.
HDP can be extended to the infinite HMM, providing an effective inference algorithm.

19 References
Y. W. Teh, M. I. Jordan, M. J. Beal and D. M. Blei, "Sharing Clusters among Related Groups: Hierarchical Dirichlet Processes", NIPS 2004.
M. J. Beal, Z. Ghahramani and C. E. Rasmussen, "The Infinite Hidden Markov Model", NIPS 2002.
Y. W. Teh, M. I. Jordan, M. J. Beal and D. M. Blei, "Hierarchical Dirichlet Processes", revised version to appear in JASA, 2006.

