
1
University of Joensuu, Dept. of Computer Science, P.O. Box 111, FIN Joensuu

Gaussian Mixture Models
Clustering Methods: Part 8
Ville Hautamäki
Speech and Image Processing Unit, Department of Computer Science, University of Joensuu, FINLAND

2
Preliminaries

We assume that the dataset X has been generated by a parametric distribution p(X). Estimating the parameters of p is known as density estimation. Here we consider the Gaussian distribution.

Figures taken from:

3
Typical parameters (1)

Mean (μ): the average (expected) value under p(X), also called the expectation.
Variance (σ²): a measure of the variability of p(X) around the mean.

4
Typical parameters (2)

Covariance: measures how much two variables vary together.
Covariance matrix: the collection of covariances between all pairs of dimensions.
–The diagonal of the covariance matrix contains the variances of the individual attributes.

5
One-dimensional Gaussian

The parameters to be estimated are the mean (μ) and the variance (σ²).
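The slide's density formula was an image and did not survive extraction; the standard one-dimensional Gaussian density it refers to is:

```latex
\mathcal{N}(x \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)
```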

6
Multivariate Gaussian (1)

In the multivariate case we have a covariance matrix Σ instead of a single variance.
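The corresponding D-dimensional density (reconstructed from the standard formulation, as the slide's formula was an image):

```latex
\mathcal{N}(\mathbf{x} \mid \boldsymbol{\mu}, \Sigma) =
\frac{1}{(2\pi)^{D/2} \, |\Sigma|^{1/2}}
\exp\!\left(-\frac{1}{2} (\mathbf{x}-\boldsymbol{\mu})^{\mathsf{T}} \Sigma^{-1} (\mathbf{x}-\boldsymbol{\mu})\right)
```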

7
Multivariate Gaussian (2)

Common covariance structures:
–Full covariance
–Diagonal
–Single (one shared variance value)

Complete data log likelihood:
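The log-likelihood expression itself was lost in extraction; for N i.i.d. samples under a single Gaussian it is:

```latex
\ln p(X \mid \boldsymbol{\mu}, \Sigma) = \sum_{n=1}^{N} \ln \mathcal{N}(\mathbf{x}_n \mid \boldsymbol{\mu}, \Sigma)
```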

8
Maximum Likelihood (ML) parameter estimation

Maximize the log likelihood formulation. Setting the gradient of the complete data log likelihood to zero, we can find the closed-form solution.
–Which, in the case of the mean, is the sample average.
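As a minimal sketch (not from the slides; the example data and variable names are illustrative), the closed-form ML estimates in NumPy:

```python
import numpy as np

# Hypothetical data: 500 samples from a known 2-D Gaussian.
rng = np.random.default_rng(0)
true_mean = [1.0, -2.0]
true_cov = [[2.0, 0.5], [0.5, 1.0]]
X = rng.multivariate_normal(mean=true_mean, cov=true_cov, size=500)

# ML estimate of the mean: the sample average.
mu_hat = X.mean(axis=0)

# ML estimate of the covariance: note the division by N, not N - 1.
diff = X - mu_hat
sigma_hat = diff.T @ diff / len(X)
```

With enough samples both estimates land close to the true parameters used to generate the data.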

9
When one Gaussian is not enough

Real-world datasets are rarely unimodal!

10
Mixtures of Gaussians

11
Mixtures of Gaussians (2)

In addition to the mean and covariance parameters (now M sets of them), we have the mixing coefficients π_k. The mixing coefficients satisfy π_k ≥ 0 and π_1 + … + π_M = 1. π_k can be seen as the prior probability of component k.

12
Responsibilities (1)

The component labels (red, green and blue) cannot be observed, so we have to calculate approximations to them, called responsibilities.

(Figure: complete data, incomplete data, responsibilities.)

13
Responsibilities (2)

A responsibility describes how probable it is that observation vector x was generated by component k. In hard clustering, responsibilities take only the values 0 and 1, and thus define a hard partitioning.

14
Responsibilities (3)

We can express the marginal density p(x) as a weighted sum of the component densities. From this, we can find the responsibility of the k-th component for x using Bayes' theorem:
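Both formulas were images in the original; the standard GMM expressions they refer to are:

```latex
p(\mathbf{x}) = \sum_{k=1}^{M} \pi_k \, \mathcal{N}(\mathbf{x} \mid \boldsymbol{\mu}_k, \Sigma_k),
\qquad
\gamma_k(\mathbf{x}) =
\frac{\pi_k \, \mathcal{N}(\mathbf{x} \mid \boldsymbol{\mu}_k, \Sigma_k)}
     {\sum_{j=1}^{M} \pi_j \, \mathcal{N}(\mathbf{x} \mid \boldsymbol{\mu}_j, \Sigma_j)}
```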

15
Expectation Maximization (EM)

Goal: maximize the log likelihood of the whole data. Once the responsibilities are calculated, we can maximize individually with respect to the means, the covariances and the mixing coefficients!
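The objective (the incomplete-data log likelihood), reconstructed from the standard GMM formulation:

```latex
\ln p(X \mid \boldsymbol{\pi}, \boldsymbol{\mu}, \Sigma) =
\sum_{n=1}^{N} \ln \sum_{k=1}^{M} \pi_k \, \mathcal{N}(\mathbf{x}_n \mid \boldsymbol{\mu}_k, \Sigma_k)
```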

16
Exact update equations

–New mean estimates
–Covariance estimates
–Mixing coefficient estimates
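The update formulas were images in the original; writing N_k for the effective number of points assigned to component k, the standard M-step estimates are:

```latex
N_k = \sum_{n=1}^{N} \gamma(z_{nk}), \qquad
\boldsymbol{\mu}_k^{\text{new}} = \frac{1}{N_k} \sum_{n=1}^{N} \gamma(z_{nk}) \, \mathbf{x}_n,
```

```latex
\Sigma_k^{\text{new}} = \frac{1}{N_k} \sum_{n=1}^{N} \gamma(z_{nk})
(\mathbf{x}_n - \boldsymbol{\mu}_k^{\text{new}})(\mathbf{x}_n - \boldsymbol{\mu}_k^{\text{new}})^{\mathsf{T}}, \qquad
\pi_k^{\text{new}} = \frac{N_k}{N}
```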

17
EM Algorithm

Initialize parameters.
while not converged:
–E step: calculate the responsibilities.
–M step: estimate new parameters.
–Calculate the log likelihood under the new parameters.
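The loop above can be sketched in NumPy as follows. This is an illustrative implementation, not the authors' code; the initialization strategy and the small regularization term added to the covariances are my assumptions.

```python
import numpy as np

def gaussian_pdf(X, mean, cov):
    """Multivariate Gaussian density evaluated at each row of X."""
    d = X.shape[1]
    diff = X - mean
    inv = np.linalg.inv(cov)
    norm = 1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    # Quadratic form (x - mu)^T Sigma^{-1} (x - mu) for every row at once.
    expo = -0.5 * np.einsum("ni,ij,nj->n", diff, inv, diff)
    return norm * np.exp(expo)

def em_gmm(X, M, n_iter=100, tol=1e-6, seed=0):
    """EM for an M-component Gaussian mixture with full covariances."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Initialize: random data points as means, the global covariance for
    # every component, uniform mixing coefficients (an assumed strategy).
    means = X[rng.choice(n, M, replace=False)].copy()
    covs = np.array([np.cov(X.T) + 1e-6 * np.eye(d) for _ in range(M)])
    pis = np.full(M, 1.0 / M)
    prev_ll = -np.inf
    for _ in range(n_iter):
        # E step: responsibilities gamma[n, k] via Bayes' theorem.
        dens = np.stack(
            [pis[k] * gaussian_pdf(X, means[k], covs[k]) for k in range(M)],
            axis=1,
        )
        ll = np.log(dens.sum(axis=1)).sum()
        gamma = dens / dens.sum(axis=1, keepdims=True)
        # M step: closed-form updates from the responsibilities.
        Nk = gamma.sum(axis=0)
        means = (gamma.T @ X) / Nk[:, None]
        for k in range(M):
            diff = X - means[k]
            covs[k] = (gamma[:, k, None] * diff).T @ diff / Nk[k] \
                      + 1e-6 * np.eye(d)  # regularize against singularity
        pis = Nk / n
        if abs(ll - prev_ll) < tol:  # converged: log likelihood stabilized
            break
        prev_ll = ll
    return pis, means, covs, ll
```

Fitting two well-separated clusters with `em_gmm(X, 2)` typically recovers one component per cluster; the regularization term keeps a covariance from collapsing onto a single point.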

18
Example of EM

19
Computational complexity

Hard clustering with the MSE criterion is NP-complete. Can we find the optimal GMM in polynomial time? Finding the optimal GMM is in class NP.

20
Some insights

In GMM we need to estimate the parameters, which are all real numbers.
–Number of free parameters with full covariances: M mixing coefficients (M − 1 free, since they sum to 1) + M·D mean components + M·D(D+1)/2 entries of the symmetric covariance matrices.
Hard clustering has no such parameters, just a set partitioning (remember the optimality criteria!).
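A quick way to check the count (a hypothetical helper, not from the slides, counting M − 1 free mixing coefficients and symmetric full covariances):

```python
def gmm_param_count(M, D):
    """Free parameters of an M-component GMM with full covariances in D dims."""
    mixing = M - 1                        # mixing coefficients sum to 1
    means = M * D                         # one D-dimensional mean per component
    covariances = M * D * (D + 1) // 2    # symmetric D x D matrix per component
    return mixing + means + covariances
```

For example, three components in two dimensions give 2 + 6 + 9 = 17 free parameters.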

21
Some further insights (2)

Both optimization functions are mathematically rigorous! Solutions minimizing MSE are always meaningful, but maximization of the log likelihood might lead to a singularity: if a component collapses onto a single data point, its variance shrinks toward zero and the likelihood grows without bound.

22
Example of singularity
