

Presentation transcript: "Maximum likelihood estimators"

1 Maximum likelihood estimators
Example: random data X_i drawn from a Poisson distribution with unknown mean μ. We want to determine μ.
For any assumed value of μ, the probability of observing X = X_i is:
P(X_i | μ) = μ^{X_i} e^{−μ} / X_i!
The likelihood of the full set of measurements for any given μ is:
L(μ) = ∏_i μ^{X_i} e^{−μ} / X_i!
The maximum-likelihood estimator of μ is the value μ̂ that maximizes L(μ).
[Figure: the measured counts X_i plotted against index i.]
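As an illustration (not part of the original slides), here is a minimal Python sketch that evaluates this likelihood on a grid of trial μ values; the data, grid range, and seed are invented for the example.

```python
# Minimal sketch: evaluate the Poisson likelihood
# L(mu) = prod_i mu^{X_i} e^{-mu} / X_i!  on a grid of trial mu values.
# The counts X_i are simulated here purely for illustration.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
X = rng.poisson(lam=4.2, size=50)          # pretend these are the measured counts

mu_grid = np.linspace(0.1, 10.0, 1000)     # trial values of mu
# Product over data points of the Poisson pmf at each trial mu
L = np.array([poisson.pmf(X, mu).prod() for mu in mu_grid])

mu_best = mu_grid[np.argmax(L)]
print(f"grid maximum at mu = {mu_best:.3f}, sample mean = {X.mean():.3f}")
```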

2 Take logs and maximize the likelihood
ln L(μ) = Σ_i [X_i ln μ − μ − ln X_i!]
Setting d(ln L)/dμ = Σ_i (X_i/μ − 1) = 0 gives the maximum likelihood estimate of μ:
μ̂ = (1/N) Σ_i X_i  (the sample mean)
Note that the result is unbiased, since E[μ̂] = (1/N) Σ_i E[X_i] = μ.
[Figure: the counts X_i plotted against index i, with the fitted mean μ̂ overplotted.]
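A short sketch of this maximization, assuming NumPy/SciPy and simulated counts: it minimizes −ln L(μ) numerically and confirms that the maximum sits at the sample mean.

```python
# Minimal sketch: maximize ln L(mu) = sum_i [X_i ln(mu) - mu - ln(X_i!)]
# numerically and check that the maximum is at the sample mean.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

rng = np.random.default_rng(1)
X = rng.poisson(lam=4.2, size=50)          # simulated counts, for illustration only

def neg_log_like(mu):
    # -ln L(mu); gammaln(X + 1) = ln(X!)
    return -np.sum(X * np.log(mu) - mu - gammaln(X + 1))

res = minimize_scalar(neg_log_like, bounds=(0.1, 20.0), method="bounded")
print(f"ML estimate mu_hat = {res.x:.4f}")
print(f"sample mean        = {X.mean():.4f}")
```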

3 Variance of the ML estimate
The algebra of random variables gives:
Var(μ̂) = Var(X_i)/N = μ/N
This is a minimum-variance estimate; its variance depends only on μ and N, not on the individual data values.
Important note: the error bars on the X_i are derived from the model, not from the data!
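A quick Monte Carlo check of Var(μ̂) = μ/N, with an invented μ and sample size:

```python
# Minimal sketch: Monte Carlo check that Var(mu_hat) ~= mu / N for the
# Poisson ML estimator mu_hat = sample mean.  Numbers chosen for illustration.
import numpy as np

rng = np.random.default_rng(2)
mu_true, N, n_trials = 4.2, 50, 20000

mu_hats = rng.poisson(lam=mu_true, size=(n_trials, N)).mean(axis=1)
print(f"empirical variance of mu_hat: {mu_hats.var():.4f}")
print(f"predicted mu/N:               {mu_true / N:.4f}")
```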

4 Error bars attach to the model, not to the data!
Example: Poisson data X_i. How can you attach an error bar to the data points?
The right way: σ(X_i) = √μ, where μ is the mean count rate predicted by the model.
The wrong way: if you assign σ(X_i) = √X_i, then when X_i = 0, σ(0) = 0, giving that point a zero error bar and hence infinite weight.
Assigning σ(X_i) = √X_i gives a downward bias, because points lower than average get smaller error bars, and hence more weight than they deserve.
[Figure: the counts X_i plotted against index i with error bars.]
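This bias can be shown numerically. The sketch below, with invented numbers, compares an inverse-variance weighted mean using σ(X_i) = √X_i against the model-based weighting (which, for a single Poisson mean, reduces to the plain sample mean); zero counts are clipped to avoid infinite weights.

```python
# Minimal sketch: downward bias from assigning sigma_i = sqrt(X_i)
# (error bars from the data) instead of sigma_i = sqrt(mu) (from the model)
# when forming an inverse-variance weighted mean.  Illustrative numbers only.
import numpy as np

rng = np.random.default_rng(3)
mu_true, N, n_trials = 4.2, 20, 20000

X = rng.poisson(lam=mu_true, size=(n_trials, N)).astype(float)

# "Wrong way": weights 1/X_i (zero counts clipped to 1 to avoid infinite weight)
w = 1.0 / np.clip(X, 1.0, None)
mu_wrong = (w * X).sum(axis=1) / w.sum(axis=1)

# "Right way": model-based errors are the same for every point (sigma^2 = mu),
# so the weighted mean reduces to the plain sample mean.
mu_right = X.mean(axis=1)

print(f"true mu:                    {mu_true}")
print(f"mean of data-weighted fit:  {mu_wrong.mean():.3f}   (biased low)")
print(f"mean of model-weighted fit: {mu_right.mean():.3f}")
```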

5 Confidence interval on a single parameter μ
The 1σ confidence interval on μ includes 68% of the area under the likelihood function L(μ).
[Figure: the likelihood function L(μ) with the central 68% region about μ̂ shaded.]
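A sketch of how such an interval can be read off numerically, assuming a flat prior so that the normalized likelihood can be integrated directly; the 16% and 84% quantiles of its area bracket the central 68%.

```python
# Minimal sketch: 68% interval on mu from the area under the likelihood.
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(4)
X = rng.poisson(lam=4.2, size=50)          # simulated counts, for illustration

mu = np.linspace(0.01, 12.0, 4000)
log_L = np.array([np.sum(X * np.log(m) - m - gammaln(X + 1)) for m in mu])
L = np.exp(log_L - log_L.max())            # rescale to avoid underflow
dmu = mu[1] - mu[0]
L /= L.sum() * dmu                         # normalize to unit area

cdf = np.cumsum(L) * dmu                   # cumulative area under L(mu)
lo = mu[np.searchsorted(cdf, 0.16)]
hi = mu[np.searchsorted(cdf, 0.84)]
print(f"mu = {X.mean():.3f}  (68% interval: {lo:.3f} to {hi:.3f})")
```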

6 Fitting a line to data – 1
Fit a line y = ax + b to a single data point:
– blue lines have χ² = 0
– red lines have χ² = 1
χ² contours in the (a, b) plane look like this:
[Figure: χ² contours in the (a, b) plane, labelled χ² = 0 and χ² = 1.]
The solution is not unique, since 2 parameters are constrained by only 1 data point.
Bayes: a prior P(a, b) will determine the value of a.
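A minimal sketch of this degeneracy, with an invented data point: the χ² surface has a whole line of exact minima rather than a single point.

```python
# Minimal sketch: chi^2 surface for fitting y = a*x + b to a single data
# point (x1, y1) with error sigma.  All values are made up for illustration.
import numpy as np

x1, y1, sigma = 2.0, 3.0, 1.0

a = np.linspace(-3, 3, 201)
b = np.linspace(-6, 6, 201)
A, B = np.meshgrid(a, b)
chi2 = ((y1 - (A * x1 + B)) / sigma) ** 2

# The chi^2 = 0 "contour" is not a point but the whole line b = y1 - a*x1:
# one data point cannot constrain two parameters.
print("minimum chi^2 on grid:", chi2.min())
print("grid cells with chi^2 < 0.01:", int((chi2 < 0.01).sum()))
```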

7 Fitting a line to data – 2a
Fitting a line y = ax + b to 2 data points:
– red lines give χ² = 2
– blue line gives χ² = 0
Note that a, b are not independent.
All solutions (a, b) lying on the red ellipse give χ² = 2.
[Figure: the two data points in the (x, y) plane, and the χ² = 0 and χ² = 2 contours in the (a, b) plane.]

8 Independent vs. correlated parameters
a and b are not independent in this example. To find the optimal (a, b) we must:
– minimize χ² with respect to a at a sequence of fixed b
– then minimize the resulting χ² values with respect to b.
If a and b were independent, then all slices through the χ² surface at each fixed b would have the same shape, and similarly for a, so we could optimize them independently, saving a lot of calculation.
How can we make a and b independent of each other?
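A sketch of this nested (profile) minimization for two invented data points, using scipy.optimize.minimize_scalar for both one-dimensional searches:

```python
# Minimal sketch: minimize chi^2 over the slope a at each fixed intercept b,
# then minimize the resulting profile over b.  Data and errors are invented.
import numpy as np
from scipy.optimize import minimize_scalar

x = np.array([1.0, 2.0])
y = np.array([1.1, 2.3])
sigma = np.array([0.2, 0.2])

def chi2(a, b):
    return np.sum(((y - (a * x + b)) / sigma) ** 2)

def profile_chi2(b):
    # best chi^2 over the slope a, holding the intercept b fixed
    return minimize_scalar(lambda a: chi2(a, b)).fun

res_b = minimize_scalar(profile_chi2)
res_a = minimize_scalar(lambda a: chi2(a, res_b.x))
print(f"best fit: a = {res_a.x:.3f}, b = {res_b.x:.3f}, chi^2 = {res_a.fun:.3g}")
```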

9 Fitting a line to data – 2b
Fitting a line y = a(x − x̄) + b to 2 data points, where x̄ is the mean of the x values:
– red lines give χ² = 2
– blue line gives χ² = 0
Note that a, b are now independent.
All solutions (a, b) lying on the red ellipse give χ² = 2.
[Figure: the two data points and the corresponding χ² contours in the (a, b) plane.]

10 Intercept and slope for independent a, b
For the centered model y = a(x − x̄) + b with equal errors σ on every point:
Intercept: b̂ = (1/N) Σ_i y_i, with variance σ_b² = σ²/N.
Slope: â = Σ_i (x_i − x̄) y_i / Σ_i (x_i − x̄)², with variance σ_a² = σ² / Σ_i (x_i − x̄)².
[Figure: χ² contours in the (a, b) plane, with widths σ_a and σ_b along the two axes.]
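A minimal sketch of these closed-form results, assuming a common error bar σ on every point and invented data:

```python
# Minimal sketch: centered parameterization y = a*(x - xbar) + b with equal
# errors sigma; closed-form estimates and variances.  Data are invented.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.2, 1.9, 3.1, 3.9, 5.2])
sigma = 0.2                                # same error bar on every point
N = len(x)
xc = x - x.mean()                          # centered abscissa

b_hat = y.mean()                           # intercept = mean of y
a_hat = np.sum(xc * y) / np.sum(xc ** 2)   # slope from centered x

var_b = sigma ** 2 / N
var_a = sigma ** 2 / np.sum(xc ** 2)
print(f"b = {b_hat:.3f} +- {np.sqrt(var_b):.3f}")
print(f"a = {a_hat:.3f} +- {np.sqrt(var_a):.3f}")
```

Because Σ_i (x_i − x̄) = 0, the covariance between â and b̂ vanishes, which is why the χ² contours line up with the parameter axes.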

11 Choosing orthogonal parameters
Good practice: results for any one parameter don't depend on the values of the other parameters.
Example: fitting a Gaussian profile. The parameters to be fitted are:
– width, w
– area A, or peak value P.
Which is best?
– Area is independent of width – good.
– Peak value depends on width – bad.
[Figure: a Gaussian profile with its peak value P and area A indicated.]

