
**Course: Neural Networks. Instructor: Professor L. Behera.**

Dirichlet Process. Joy Bhattacharjee, Department of ChE, IIT Kanpur.
(Named after Johann Peter Gustav Lejeune Dirichlet.)

**What is Dirichlet Process ?**

The Dirichlet process is a stochastic process used in Bayesian nonparametric models of data, particularly in Dirichlet process mixture models (also known as infinite mixture models). It is a distribution over distributions, i.e. each draw from a Dirichlet process is itself a distribution. It is called a Dirichlet process because it has Dirichlet distributed finite dimensional marginal distributions.

**Dirichlet Priors**

- A distribution over possible parameter vectors of the multinomial distribution; its values must lie in the (k − 1)-dimensional simplex.
- The Beta distribution is the two-parameter (k = 2) special case.
- Expectation: E[Q_i] = α_i / Σ_j α_j.
- It is the conjugate prior to the multinomial.
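The conjugacy stated above can be sketched in a few lines; the function names here are illustrative, not from the slides. With a Dir(α) prior and observed multinomial counts n, the posterior is simply Dir(α + n):

```python
import random

def dirichlet_sample(alpha, rng):
    """Draw from Dir(alpha) by normalizing independent Gamma draws."""
    g = [rng.gammavariate(a, 1.0) for a in alpha]
    s = sum(g)
    return [x / s for x in g]

def posterior(alpha, counts):
    """Conjugate update: Dir(alpha) prior + multinomial counts -> Dir(alpha + counts)."""
    return [a + n for a, n in zip(alpha, counts)]

def expectation(alpha):
    """E[Q_i] = alpha_i / sum_j alpha_j."""
    s = sum(alpha)
    return [a / s for a in alpha]
```

For example, a Dir(1, 1, 1) prior with counts (5, 0, 2) yields a Dir(6, 1, 3) posterior with mean (0.6, 0.1, 0.3).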

**What is the Dirichlet Distribution?**

Methods to generate a Dirichlet distribution:
- Polya's urn scheme
- Stick-breaking process
- Chinese restaurant process

**Samples from a DP**

[Figure: sample draws from a Dirichlet process; image not preserved.]

**Dirichlet Distribution**

[Figure: Dirichlet distribution plots; image not preserved.]
Polya's urn scheme: suppose we want to generate a realization of Q ~ Dir(α). To start, put α_i balls of color i, for i = 1, 2, …, k, in an urn. Note that α_i > 0 is not necessarily an integer, so we may have a fractional or even an irrational number of balls of color i in our urn! At each iteration, draw one ball uniformly at random from the urn, and then place it back along with an additional ball of the same color. As we iterate this procedure more and more times, the proportions of balls of each color converge to a pmf that is a sample from Dir(α).
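The urn scheme can be simulated directly. This sketch (the `polya_urn` helper name is illustrative, standard library only) approximates one draw from Dir(α); fractional initial "ball masses" are allowed, exactly as the scheme describes:

```python
import random

def polya_urn(alpha, iterations=100_000, seed=0):
    """Approximate one draw from Dir(alpha) by simulating Polya's urn.

    alpha[i] is the (possibly fractional) initial mass of balls of color i;
    each draw returns the ball plus one whole new ball of the same color."""
    rng = random.Random(seed)
    weights = list(alpha)                 # current ball mass per color
    for _ in range(iterations):
        total = sum(weights)
        r = rng.random() * total          # pick a ball uniformly by mass
        for i, w in enumerate(weights):
            if r < w:
                weights[i] += 1.0         # add one more ball of that color
                break
            r -= w
    total = sum(weights)
    return [w / total for w in weights]   # color proportions ~ Dir(alpha)
```

Running `polya_urn([1.0, 1.0, 1.0])` returns one point on the 2-simplex; repeated runs with different seeds scatter across the simplex as Dir(1, 1, 1), i.e. uniformly.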

Mathematical form:
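The formula image on this slide did not survive extraction. For reference, the standard density of Dir(α_1, …, α_k) over the (k − 1)-simplex is:

```latex
f(q_1,\dots,q_k;\,\alpha_1,\dots,\alpha_k)
  = \frac{\Gamma\!\left(\sum_{i=1}^{k}\alpha_i\right)}
         {\prod_{i=1}^{k}\Gamma(\alpha_i)}
    \prod_{i=1}^{k} q_i^{\,\alpha_i-1},
\qquad q_i \ge 0,\quad \sum_{i=1}^{k} q_i = 1.
```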

**Stick Breaking Process**

The stick-breaking approach to generating a random vector with a Dir(α) distribution iteratively breaks a stick of length 1 into k pieces in such a way that the lengths of the k pieces follow a Dir(α) distribution.
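One way to code this finite stick-breaking construction (helper name is illustrative): piece i takes a Beta(α_i, Σ_{j>i} α_j) fraction of the stick that remains, and the last piece is the leftover, which together give a Dir(α) vector:

```python
import random

def stick_break_dirichlet(alpha, seed=0):
    """Generate one Dir(alpha) vector by stick breaking.

    Piece i takes a Beta(alpha[i], sum(alpha[i+1:]))-distributed
    fraction of the remaining stick; the final piece is the remainder."""
    rng = random.Random(seed)
    k = len(alpha)
    pieces, remaining = [], 1.0
    for i in range(k - 1):
        frac = rng.betavariate(alpha[i], sum(alpha[i + 1:]))
        pieces.append(remaining * frac)
        remaining -= pieces[-1]
    pieces.append(remaining)          # last piece: whatever is left
    return pieces
```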

**Stick Breaking Process**

[Figure: stick-breaking illustration with break values 0.4, 0.6, 0.5, 0.3, 0.3, 0.8, 0.24; image not preserved.]

What is G? G is a sample from the DP, and the parameters θ for each datum are drawn from it. Because the probability of drawing the same θ twice is positive, G must be discrete. It depends on the base measure G0 and can be constructed by the stick-breaking process.
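For the DP itself, a sample G can be approximated by a truncated stick-breaking (GEM) construction: Beta(1, α) breaks give the weights, and the atoms are i.i.d. draws from G0. A minimal sketch, assuming a finite truncation level in place of the infinite sum (function name illustrative):

```python
import random

def dp_stick_breaking(alpha, base_sampler, truncate=100, seed=0):
    """Truncated stick-breaking draw of G ~ DP(alpha, G0).

    beta_k ~ Beta(1, alpha); w_k = beta_k * prod_{j<k}(1 - beta_j);
    atoms are i.i.d. draws from G0 via the `base_sampler(rng)` callable."""
    rng = random.Random(seed)
    weights, atoms, remaining = [], [], 1.0
    for _ in range(truncate - 1):
        b = rng.betavariate(1.0, alpha)
        weights.append(remaining * b)
        atoms.append(base_sampler(rng))
        remaining -= weights[-1]
    weights.append(remaining)          # lump the tail mass into one last atom
    atoms.append(base_sampler(rng))
    return weights, atoms              # G = sum_k weights[k] * delta(atoms[k])
```

With, say, `base_sampler=lambda rng: rng.gauss(0.0, 1.0)`, G is a discrete measure whose atoms sit at Gaussian locations, making the discreteness noted above explicit.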

**Chinese Restaurant Process**

The CRP is a distribution on partitions that captures the clustering effect of the DP: customer n + 1 joins an occupied table with probability proportional to the number of customers already seated there, and starts a new table with probability proportional to α.
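A minimal simulation of this seating rule (function name illustrative): customer i sits at an existing table with probability count/(i + α), or opens a new table with probability α/(i + α):

```python
import random

def chinese_restaurant_process(n_customers, alpha, seed=0):
    """Sample a partition of n_customers customers from CRP(alpha)."""
    rng = random.Random(seed)
    tables = []                        # tables[t] = number of customers at table t
    assignment = []                    # assignment[i] = table index of customer i
    for i in range(n_customers):
        r = rng.random() * (i + alpha)
        for t, count in enumerate(tables):
            if r < count:              # join table t, prob. proportional to count
                tables[t] += 1
                assignment.append(t)
                break
            r -= count
        else:                          # r landed in the alpha-wide slot: new table
            assignment.append(len(tables))
            tables.append(1)
    # the number of occupied tables grows like O(alpha * log n)
    return assignment, tables
```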

**Nested CRP: to generate a document given a tree with L levels**

1. Choose a path from the root of the tree to a leaf.
2. Draw a vector of topic mixing proportions from an L-dimensional Dirichlet.
3. Generate the words in the document from a mixture of the topics along the path, with those mixing proportions.
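The three steps above can be sketched as follows, assuming a shared tree whose nodes each hold a CRP over their children plus a randomly drawn topic (word distribution); all class and function names here are illustrative:

```python
import random

def sample_categorical(probs, rng):
    r = rng.random() * sum(probs)
    for i, p in enumerate(probs):
        if r < p:
            return i
        r -= p
    return len(probs) - 1

class Node:
    """Tree node: CRP counts over children plus a topic over the vocabulary."""
    def __init__(self, vocab_size, rng):
        self.children, self.counts = [], []
        w = [rng.gammavariate(0.5, 1.0) for _ in range(vocab_size)]
        s = sum(w)
        self.topic = [x / s for x in w]

def crp_step(node, gamma, vocab_size, rng):
    """Existing child w.p. proportional to its count; new child w.p. ~ gamma."""
    r = rng.random() * (sum(node.counts) + gamma)
    for i, c in enumerate(node.counts):
        if r < c:
            node.counts[i] += 1
            return node.children[i]
        r -= c
    child = Node(vocab_size, rng)
    node.children.append(child)
    node.counts.append(1)
    return child

def generate_document(root, L, gamma, alpha, n_words, vocab_size, rng):
    # 1. choose a root-to-leaf path via nested CRPs
    path = [root]
    for _ in range(L - 1):
        path.append(crp_step(path[-1], gamma, vocab_size, rng))
    # 2. draw level-mixing proportions from an L-dimensional Dirichlet(alpha)
    g = [rng.gammavariate(alpha, 1.0) for _ in range(L)]
    theta = [x / sum(g) for x in g]
    # 3. each word: pick a level, then a word from that level's topic
    words = [sample_categorical(path[sample_categorical(theta, rng)].topic, rng)
             for _ in range(n_words)]
    return words, path
```

Generating several documents against the same `root` makes them share high-level topics near the root while diverging lower in the tree, which is the point of the nested CRP.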

**Nested CRP**

Used for modeling topic hierarchies by Blei et al., 2004.

[Figure: topic tree over Day 1, Day 2, Day 3; image not preserved.]

**Properties of the DP**

Let (Θ, Σ) be a measurable space, G0 a probability measure on that space, and α a positive real number. A Dirichlet process DP(α, G0) is the distribution of a random probability measure G over (Θ, Σ) such that, for all finite measurable partitions (A1, …, Ar) of Θ,

(G(A1), …, G(Ar)) ~ Dir(αG0(A1), …, αG0(Ar)).

Draws from G are generally not distinct; the number of distinct values grows as O(log n).

In general, an infinite set of random variables is said to be infinitely exchangeable if for every finite subset {x1, …, xn} and for any permutation σ we have p(x1, …, xn) = p(xσ(1), …, xσ(n)). Note that infinite exchangeability is not the same as being independent and identically distributed (i.i.d.)! Using de Finetti's theorem, it is possible to show that draws from a DP are infinitely exchangeable; thus the mixture components may be sampled in any order.

**Mixture Model Inference**

We want to find a clustering of the data: an assignment of values to the hidden class variable. Sometimes we also want the component parameters. In most finite mixture models, this can be found with EM, but the Dirichlet process is a non-parametric prior and doesn't permit EM, so we use Gibbs sampling instead.
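A collapsed Gibbs sampler for a DP mixture can be sketched for the simplest conjugate case: 1-D Gaussian components with known variance σ² and a N(μ0, τ0²) prior on the means. This is an illustrative sketch under those assumptions, not the lecture's exact algorithm; each point is reseated by CRP prior × posterior-predictive likelihood:

```python
import math, random

def dp_mixture_gibbs(data, alpha=1.0, sigma=1.0, mu0=0.0, tau0=3.0,
                     iters=100, seed=0):
    """Collapsed Gibbs sampling for a DP mixture of 1-D Gaussians."""
    rng = random.Random(seed)
    n = len(data)
    z = [0] * n                        # cluster assignment per point
    clusters = {0: list(range(n))}     # cluster id -> member indices
    next_id = 1

    def pred_logpdf(x, members):
        # posterior predictive of x given a cluster's members (may be empty)
        m = len(members)
        if m == 0:
            mean, var = mu0, tau0 ** 2 + sigma ** 2
        else:
            s = sum(data[i] for i in members)
            post_var = 1.0 / (1.0 / tau0 ** 2 + m / sigma ** 2)
            mean = post_var * (mu0 / tau0 ** 2 + s / sigma ** 2)
            var = post_var + sigma ** 2
        return -0.5 * math.log(2 * math.pi * var) - (x - mean) ** 2 / (2 * var)

    for _ in range(iters):
        for i in range(n):
            c = z[i]                   # remove point i from its cluster
            clusters[c].remove(i)
            if not clusters[c]:
                del clusters[c]
            ids = list(clusters.keys())
            # CRP prior (counts, or alpha for a new table) times likelihood
            logw = [math.log(len(clusters[c2])) + pred_logpdf(data[i], clusters[c2])
                    for c2 in ids]
            logw.append(math.log(alpha) + pred_logpdf(data[i], []))
            mx = max(logw)
            w = [math.exp(l - mx) for l in logw]
            r = rng.random() * sum(w)
            k = 0
            while r > w[k]:
                r -= w[k]
                k += 1
            if k == len(ids):          # open a new cluster
                clusters[next_id] = [i]
                z[i] = next_id
                next_id += 1
            else:
                clusters[ids[k]].append(i)
                z[i] = ids[k]
    return z, clusters
```

On two well-separated groups of points, the sampler settles on about two occupied clusters without the number of components ever being specified.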

**Finite mixture model**

[Graphical model figure not preserved.]

**Infinite mixture model**

[Graphical model figure not preserved.]

**DP mixture model**

[Graphical model figure not preserved.]
**Agglomerative Clustering**

[Table: number of clusters vs. maximum merge distance as clusters are merged from 20 down to 1; values only partially recoverable.]

Pros: doesn't need a generative model (number of clusters, parametric distribution).
Cons: ad hoc, no probabilistic foundation, intractable for large data sets.

**Mixture Model Clustering**

Examples: k-means, mixture of Gaussians, Naïve Bayes.
Pros: sound probabilistic foundation; efficient even for large data sets.
Cons: requires a generative model, including the number of clusters (mixture components).

**Applications: Clustering in Natural Language Processing**

- Document clustering by topic, genre, sentiment, …
- Word clustering for part of speech (POS), word sense disambiguation (WSD), synonymy, …
- Topic clustering across documents
- Noun coreference: we don't know how many entities there are
- Other identity-uncertainty problems: deduplication, etc.
- Grammar induction
- Sequence modeling: the "infinite HMM"
- Topic segmentation
- Sequence models for POS tagging
- Society modeling in public places
- Unsupervised machine learning in general: useful any time you want to cluster or do unsupervised learning without specifying the number of clusters

**References**

- Bela A. Frigyik, Amol Kapila, and Maya R. Gupta, University of Washington, Seattle. UWEE Technical Report: "Introduction to the Dirichlet Distribution and Related Processes."
- Yee Whye Teh, University College London: "Dirichlet Process."
- Khalid El-Arini, Select Lab meeting, October 2006.
- Teg Grenager, Natural Language Processing, Stanford University: introduction to the Chinese restaurant problem and the stick-breaking scheme.
- Wikipedia

**Questions**

- Suggest some distributions that can use the Dirichlet process to find classes.
- What are the applications in finite mixture models?
- Comment on: the DP of a cluster is also a Dirichlet distribution.
