
1 Image Modeling & Segmentation Aly Farag and Asem Ali Lecture #3

2 Parametric methods
- These methods are useful when the underlying distribution is known in advance, or is simple enough to be modeled by a standard distribution function or a mixture of such functions.
- The parametric model is very compact (low memory and CPU usage): only a few parameters need to be fitted.
- The model's parameters are estimated using methods such as maximum likelihood estimation, Bayesian estimation, and expectation maximization.
  - A location parameter simply shifts the graph left or right along the horizontal axis.
  - A scale parameter stretches (>1) or compresses (<1) the pdf (illustrated in the sketch below).
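The slides illustrate location and scale with plots that are not in the transcript; the following minimal Python sketch (an addition, assuming NumPy and SciPy are available) evaluates a Gaussian pdf under shifted and rescaled parameters to show the same effect numerically.

```python
# Minimal sketch (assumption: NumPy and SciPy available) showing how a
# location parameter shifts a pdf and a scale parameter stretches/compresses it.
import numpy as np
from scipy.stats import norm

x = np.linspace(-6, 6, 7)

base      = norm(loc=0, scale=1).pdf(x)    # reference N(0, 1)
shifted   = norm(loc=2, scale=1).pdf(x)    # location 2: same shape, moved right
stretched = norm(loc=0, scale=2).pdf(x)    # scale 2 (>1): wider, lower peak
squeezed  = norm(loc=0, scale=0.5).pdf(x)  # scale 0.5 (<1): narrower, taller peak

for name, vals in [("base", base), ("shifted", shifted),
                   ("stretched", stretched), ("squeezed", squeezed)]:
    print(name, np.round(vals, 3))
```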

3 Parametric methods (1- Maximum Likelihood Estimator: MLE)
- Suppose that n samples x_1, x_2, ..., x_n are drawn independently and identically distributed (i.i.d.) from a distribution φ(θ) with parameter vector θ = (θ_1, ..., θ_r).
- Known: the data samples and the distribution type. Unknown: θ.
- The MLE method estimates θ by maximizing the log likelihood of the data. Writing $p(D \mid \theta)$ to show the dependence of p on θ explicitly, the i.i.d. assumption gives
$$ p(D \mid \theta) = \prod_{k=1}^{n} p(x_k \mid \theta), $$
and, by the monotonicity of the logarithm, maximizing this is equivalent to maximizing the log likelihood:
$$ \hat{\theta} = \arg\max_{\theta} \sum_{k=1}^{n} \ln p(x_k \mid \theta). $$

4 Parametric methods (1- Maximum Likelihood Estimator: MLE)
- Let $l(\theta) = \ln p(D \mid \theta)$.
- Then calculate the gradient $\nabla_{\theta} l$.
- Find θ by setting $\nabla_{\theta} l = 0$.
Example: Suppose that n samples x_1, x_2, ..., x_n are drawn i.i.d. from a 1D N(μ, σ); find the MLE of μ and σ. Setting the derivatives of the log likelihood to zero gives
$$ \hat{\mu} = \frac{1}{n}\sum_{k=1}^{n} x_k, \qquad \hat{\sigma}^2 = \frac{1}{n}\sum_{k=1}^{n} (x_k - \hat{\mu})^2. $$
MATLAB demo (a Python sketch is given below).
- In some cases we can find a closed form for θ.
- Coin Example ………………..
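The MATLAB demo referenced above is not included in the transcript; as a stand-in, here is a minimal Python sketch (an addition, assuming NumPy is available) of the closed-form Gaussian MLE from the example.

```python
# Minimal sketch (assumption: NumPy available) of the closed-form MLE for a
# 1D Gaussian: the sample mean and the (biased) 1/n sample variance.
import numpy as np

rng = np.random.default_rng(0)
true_mu, true_sigma, n = 3.0, 2.0, 10_000
x = rng.normal(true_mu, true_sigma, size=n)     # i.i.d. samples ~ N(mu, sigma^2)

mu_hat = x.mean()                               # MLE of the mean
sigma2_hat = np.mean((x - mu_hat) ** 2)         # MLE of the variance (divides by n)

print(f"mu_hat    = {mu_hat:.3f}  (true {true_mu})")
print(f"sigma_hat = {np.sqrt(sigma2_hat):.3f}  (true {true_sigma})")
```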

5 Parametric methods (1- Maximum Likelihood Estimator: MLE)
- An estimator of a parameter is unbiased if the expected value of the estimate is the same as the true value of the parameter.
  Example: the sample mean $\hat{\mu} = \frac{1}{n}\sum_{k} x_k$ is unbiased, since $E[\hat{\mu}] = \mu$.
- An estimator of a parameter is biased if the expected value of the estimate is different from the true value of the parameter.
  Example: the MLE of the variance, $\hat{\sigma}^2 = \frac{1}{n}\sum_{k}(x_k - \hat{\mu})^2$, is biased, since $E[\hat{\sigma}^2] = \frac{n-1}{n}\sigma^2 \neq \sigma^2$; this doesn't make much difference once n becomes large (a quick numerical check follows).
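The following small simulation (an addition, assuming NumPy is available) checks the bias claim empirically by averaging the MLE variance estimate over many small samples.

```python
# Minimal sketch (assumption: NumPy available) checking the bias of the MLE
# variance estimator by averaging it over many small samples.
import numpy as np

rng = np.random.default_rng(1)
sigma2_true, n, trials = 4.0, 5, 100_000

samples = rng.normal(0.0, np.sqrt(sigma2_true), size=(trials, n))
mu_hat = samples.mean(axis=1, keepdims=True)
sigma2_mle = np.mean((samples - mu_hat) ** 2, axis=1)   # divides by n (biased)
sigma2_unbiased = samples.var(axis=1, ddof=1)           # divides by n-1

print("E[MLE variance]      ~", sigma2_mle.mean())       # ~ (n-1)/n * 4 = 3.2
print("E[unbiased variance] ~", sigma2_unbiased.mean())  # ~ 4.0
```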

6 Parametric methods: What if there are distinct subpopulations in the observed data?
Example
- In 1894, Pearson tried to model the distribution of the ratio between forehead and body-length measurements of crabs.
- He used a two-component mixture.
- It was hypothesized that the two-component structure was related to the possibility of this particular population of crabs evolving into two new subspecies.
Mixture Model: the underlying density is assumed to have the form
$$ p(x \mid \Theta) = \sum_{j=1}^{K} w_j \, p_j(x \mid \theta_j), $$
where the weights are constrained by $\sum_{j=1}^{K} w_j = 1$, $w_j \ge 0$, and the components of the mixture are densities parameterized by $\theta_j$ (see the sketch below).
What is the difference between a mixture model and the kernel-based estimator?
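As a concrete instance of the mixture density above, the following minimal Python sketch (an addition, assuming NumPy and SciPy are available) evaluates a two-component Gaussian mixture.

```python
# Minimal sketch (assumption: NumPy and SciPy available) of a two-component
# Gaussian mixture density p(x) = w1*N(x; m1, s1^2) + w2*N(x; m2, s2^2).
import numpy as np
from scipy.stats import norm

w = np.array([0.4, 0.6])        # mixture weights, sum to 1
mu = np.array([-2.0, 3.0])      # component means
sigma = np.array([1.0, 1.5])    # component standard deviations

def mixture_pdf(x):
    # Sum of weighted component densities evaluated at x.
    return sum(wj * norm(mj, sj).pdf(x) for wj, mj, sj in zip(w, mu, sigma))

x = np.linspace(-6, 8, 8)
print(np.round(mixture_pdf(x), 4))
```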

7 Parametric methods Example
- Given n samples {x_i, C_i1, C_i2} (complete data) drawn i.i.d. from two normal distributions:
  - x_i is the observed value of the i-th instance,
  - C_i1 and C_i2 indicate which of the two normal distributions was used to generate x_i,
  - C_ij = 1 if the j-th distribution was used to generate x_i, and 0 otherwise.
- MLE: with complete data, each component's parameters are estimated from the samples labeled with it, e.g. $\hat{\mu}_j = \frac{\sum_i C_{ij} x_i}{\sum_i C_{ij}}$ (see the sketch below).
How can we estimate the parameters given incomplete data (when we don't know C_i1 and C_i2)?
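A tiny numeric illustration of the complete-data case (an addition, assuming NumPy is available; the data values are made up for illustration):

```python
# Minimal sketch (assumption: NumPy available): with complete data the MLE of
# each component mean is just the average of the samples labeled with it.
import numpy as np

x = np.array([1.2, 0.8, 5.1, 4.9, 1.1, 5.0])                    # observed values x_i
C = np.array([[1, 0], [1, 0], [0, 1], [0, 1], [1, 0], [0, 1]])  # indicator labels C_ij

mu_hat = (C * x[:, None]).sum(axis=0) / C.sum(axis=0)           # per-component means
print(mu_hat)   # -> roughly [1.03, 5.0]
```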

8 Parametric methods (2- Expectation Maximization: EM)
- The EM algorithm is a general method for finding the maximum-likelihood estimate of the parameters of an underlying distribution from a given data set when the data is incomplete or has missing values.
EM Algorithm:
- Given initial parameters Θ_0,
- Repeatedly:
  - re-estimate the expected values of the hidden binary variables C_ij,
  - then recalculate the MLE of Θ using these expected values for the hidden variables.
(A generic skeleton of this loop is sketched below.)
Note:
- EM is an unsupervised method, whereas MLE as used above is supervised.
- To use EM you must know: the number of classes K and the parametric form of the distribution.
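A generic skeleton of the loop just described, written in Python (an addition; the helper callables `e_step` and `m_step` are hypothetical names, not from the slides; slide 13 gives the concrete Gaussian-mixture versions):

```python
# Generic skeleton of the EM loop described above (a sketch, not the slides' code).
def em(x, theta0, e_step, m_step, n_iters=100):
    """Run n_iters EM iterations starting from theta0.

    e_step(x, theta)       -> expected values of the hidden indicators C_ij
    m_step(x, expected_C)  -> MLE of theta given the "completed" data
    """
    theta = theta0
    for _ in range(n_iters):
        expected_C = e_step(x, theta)    # E-step
        theta = m_step(x, expected_C)    # M-step
    return theta
```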

9 Illustrative example (figure in the original slides contrasting complete vs. incomplete data).

10 Illustrative example (continued; figure only in the original slides).

11 Parametric methods (2- Expectation Maximization: EM)
- Assume a joint density function p(x, C | Θ) for the complete data set.
- The EM algorithm first finds the expected value of the complete-data log-likelihood with respect to the unknown data C, given the observed data x and the current parameter estimates Θ^{i-1}:
$$ Q(\Theta, \Theta^{i-1}) = E\big[\log p(x, C \mid \Theta) \,\big|\, x, \Theta^{i-1}\big], $$
  where Θ^{i-1} are the current parameter estimates used to evaluate the expectation, and Θ are the new parameters that we optimize to maximize Q.
- The evaluation of this expectation is called the E-step of the algorithm.
- The second step (the M-step) of the EM algorithm is to maximize the expectation computed in the first step:
$$ \Theta^{i} = \arg\max_{\Theta} Q(\Theta, \Theta^{i-1}). $$
- These two steps are repeated as necessary. Each iteration is guaranteed to increase the log likelihood, and the algorithm is guaranteed to converge to a local maximum of the likelihood function.

12 Parametric methods (2- Expectation Maximization: EM)
- The mixture-density parameter estimation problem: the incomplete-data log likelihood is
$$ \log L(\Theta \mid x) = \sum_{i=1}^{n} \log \Big( \sum_{j=1}^{K} w_j \, p_j(x_i \mid \theta_j) \Big). $$
- Using Bayes's rule, we can compute the posterior probability of component j for sample x_i under the current parameter estimates Θ^g:
$$ p(j \mid x_i, \Theta^{g}) = \frac{w_j^{g}\, p_j(x_i \mid \theta_j^{g})}{\sum_{k=1}^{K} w_k^{g}\, p_k(x_i \mid \theta_k^{g})}. $$
- Then the E-step evaluates the expected complete-data log likelihood $Q(\Theta, \Theta^{g})$ using these posteriors.
- Grades Example ………………..

13 Parametric methods (2- Expectation Maximization: EM)
- For some distributions, it is possible to get analytical expressions for the parameter updates.
- For example, if we assume d-dimensional Gaussian component distributions:
E-step: compute the posteriors $p(j \mid x_i, \Theta^{g})$ as on the previous slide, with $p_j(x \mid \mu_j, \Sigma_j)$ the d-dimensional Gaussian density.
M-step: update the parameters using these posteriors:
$$ w_j^{new} = \frac{1}{n} \sum_{i=1}^{n} p(j \mid x_i, \Theta^{g}), \qquad \mu_j^{new} = \frac{\sum_{i=1}^{n} x_i \, p(j \mid x_i, \Theta^{g})}{\sum_{i=1}^{n} p(j \mid x_i, \Theta^{g})}, $$
$$ \Sigma_j^{new} = \frac{\sum_{i=1}^{n} p(j \mid x_i, \Theta^{g}) (x_i - \mu_j^{new})(x_i - \mu_j^{new})^{T}}{\sum_{i=1}^{n} p(j \mid x_i, \Theta^{g})}. $$
(A Python sketch of these updates follows.)
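The updates above translate directly into code. The following minimal Python sketch (an addition, assuming NumPy is available) implements EM for a 1D Gaussian mixture, i.e. the scalar special case of the formulas above.

```python
# Minimal sketch (assumption: NumPy available) of EM for a 1D Gaussian mixture,
# implementing the E-step posteriors and the M-step updates shown above.
import numpy as np

def gaussian_pdf(x, mu, var):
    # 1D Gaussian density N(x; mu, var).
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def em_gmm_1d(x, K=2, n_iters=50, seed=0):
    rng = np.random.default_rng(seed)
    n = x.size
    w = np.full(K, 1.0 / K)                     # mixture weights
    mu = rng.choice(x, size=K, replace=False)   # initialize means from the data
    var = np.full(K, x.var())                   # initialize variances

    for _ in range(n_iters):
        # E-step: responsibilities p(j | x_i, Theta), shape (n, K).
        r = np.stack([w[j] * gaussian_pdf(x, mu[j], var[j]) for j in range(K)], axis=1)
        r /= r.sum(axis=1, keepdims=True)

        # M-step: re-estimate weights, means and variances from the responsibilities.
        Nj = r.sum(axis=0)                      # effective number of points per component
        w = Nj / n
        mu = (r * x[:, None]).sum(axis=0) / Nj
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / Nj
    return w, mu, var

# Usage: two well-separated Gaussian clusters.
rng = np.random.default_rng(42)
data = np.concatenate([rng.normal(-2, 1, 300), rng.normal(4, 1.5, 700)])
print(em_gmm_1d(data))
```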

14 Parametric methods (2- Expectation Maximization: EM) Example: MATLAB demo (a Python stand-in is sketched below).
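The MATLAB demo itself is not part of the transcript; as a rough stand-in (an addition, assuming NumPy and scikit-learn are available), scikit-learn's GaussianMixture fits the same model with EM.

```python
# Rough stand-in for the MATLAB demo (assumption: NumPy and scikit-learn available):
# fit a two-component Gaussian mixture with EM and inspect the estimated parameters.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 1, 300), rng.normal(4, 1.5, 700)]).reshape(-1, 1)

gm = GaussianMixture(n_components=2, random_state=0).fit(data)
print("weights:", gm.weights_)
print("means:  ", gm.means_.ravel())
print("stds:   ", np.sqrt(gm.covariances_).ravel())
```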

