Rao-Cramér-Fréchet (RCF) bound of minimum variance
6. Maximum Likelihood – 6.1 Maximum likelihood method
K. Desch – Statistical methods of data analysis, SS10

1 Rao-Cramér-Fréchet (RCF) bound of minimum variance (w/o proof)
The variance of an estimator θ̂ of a single parameter θ is limited by
  V[θ̂] ≥ (1 + ∂b/∂θ)² / E[−∂² ln L/∂θ²],
where b = E[θ̂] − θ is the bias.
An estimator is called "efficient" when the bound is exactly reached.
When an efficient estimator exists, it is an ML estimator; all ML estimators are efficient for n → ∞.
Example: exponential distribution f(t; τ) = (1/τ) e^(−t/τ). The ML estimator is τ̂ = (1/n) Σ_i t_i with V[τ̂] = τ²/n, which equals the RCF bound (b = 0): τ̂ is efficient.
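As a quick illustration (not from the lecture; the sample size and τ below are invented toy numbers), a short simulation can check that the exponential ML estimator reaches the RCF bound τ²/n:

```python
# Sketch: check numerically that the ML estimator of the exponential mean
# reaches the RCF bound V[tau_hat] = tau^2 / n.
import numpy as np

rng = np.random.default_rng(0)
tau_true, n, n_experiments = 2.0, 100, 5000

# The ML estimator for the exponential mean is the sample mean of the t_i.
tau_hat = np.array([rng.exponential(tau_true, n).mean() for _ in range(n_experiments)])

print("empirical variance:", tau_hat.var(ddof=1))
print("RCF bound tau^2/n :", tau_true**2 / n)
```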

2 Several parameters θ = (θ_1, …, θ_m): the RCF bound generalizes to V ≥ I⁻¹(θ), where
  I_ij(θ) = E[−∂² ln L/∂θ_i ∂θ_j]
is the "Fisher information matrix".
Since I ~ n, one has V⁻¹ ~ n and V ~ 1/n → statistical errors decrease in proportion to 1/√n (for efficient estimators).

3 Estimation of the covariance matrix: the expectation value in the Fisher information matrix is often difficult to calculate.
Estimate it instead from the data by evaluating the second derivatives at the ML estimate:
  (V̂⁻¹)_ij = −∂² ln L/∂θ_i ∂θ_j |_(θ = θ̂).
For 1 parameter: σ̂²(θ̂) = ( −∂² ln L/∂θ² |_(θ̂) )⁻¹.
The second derivatives can be determined analytically or numerically.
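A minimal numerical sketch of this recipe (toy exponential data; the data set, step size and variable names are assumptions of mine) evaluates the second derivative by finite differences:

```python
# Sketch: estimate the statistical error from the numerically evaluated
# second derivative of ln L at the ML estimate (1-parameter exponential model).
import numpy as np

rng = np.random.default_rng(1)
t = rng.exponential(2.0, 500)               # toy data set

def log_L(tau):
    return np.sum(-np.log(tau) - t / tau)   # ln L for f(t;tau) = exp(-t/tau)/tau

tau_hat = t.mean()                          # analytic ML estimate
h = 1e-4
d2 = (log_L(tau_hat + h) - 2 * log_L(tau_hat) + log_L(tau_hat - h)) / h**2
sigma_tau = 1.0 / np.sqrt(-d2)              # sigma^2 = (-d^2 lnL/dtau^2)^-1

print(f"tau_hat = {tau_hat:.3f} +- {sigma_tau:.3f} "
      f"(expect ~ tau_hat/sqrt(n) = {tau_hat / np.sqrt(len(t)):.3f})")
```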

4 Graphical method for the determination of the variance:
Expand ln L(θ) in a Taylor series around the ML estimate θ̂:
  ln L(θ) = ln L(θ̂) + (∂ ln L/∂θ)|_(θ̂) (θ − θ̂) + (1/2)(∂² ln L/∂θ²)|_(θ̂) (θ − θ̂)² + …
We know that (∂ ln L/∂θ)|_(θ̂) = 0 and, from the RCF bound, (∂² ln L/∂θ²)|_(θ̂) = −1/σ̂²(θ̂), so
  ln L(θ̂ ± σ̂(θ̂)) = ln L_max − 1/2
(this can be considered as the definition of the statistical error).
For n → ∞ the ln L(θ) function becomes a parabola (the higher-order terms vanish).
When ln L is not parabolic → asymmetrical errors; the points where ln L drops by 1/2 define an (approximate) central confidence interval [θ̂ − σ̂₋, θ̂ + σ̂₊].
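A sketch of the graphical method on the same toy exponential likelihood (the data set and scan range are my own choices) reads off the Δ ln L = −1/2 crossings:

```python
# Sketch: scan ln L(tau) and read off the points where it drops by 1/2
# from its maximum to get (possibly asymmetric) errors.
import numpy as np

rng = np.random.default_rng(2)
t = rng.exponential(2.0, 50)

def log_L(tau):
    return np.sum(-np.log(tau) - t / tau)

tau_hat = t.mean()
taus = np.linspace(0.5 * tau_hat, 2.0 * tau_hat, 2000)
dlnL = np.array([log_L(x) for x in taus]) - log_L(tau_hat)

inside = taus[dlnL >= -0.5]                 # region where ln L >= ln L_max - 1/2
sig_minus, sig_plus = tau_hat - inside.min(), inside.max() - tau_hat
print(f"tau_hat = {tau_hat:.3f}  -{sig_minus:.3f} / +{sig_plus:.3f}")
```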

5 [figure slide – no text in the transcript]

6 Extended maximum likelihood (EML)
The model makes predictions on the normalization and on the form of the distribution; example: a differential cross section (angular distribution) predicts both the shape and the absolute normalization.
The absolute normalization, i.e. the observed number of events n, is Poisson distributed with mean ν, and ν itself is a function of θ.
Extended likelihood function:
  L(ν, θ) = (νⁿ/n!) e^(−ν) ∏_i f(x_i; θ)
1. When ν = ν(θ), the logarithm of the EML is (up to terms independent of θ)
  ln L(θ) = −ν(θ) + Σ_i ln[ν(θ) f(x_i; θ)].
This leads in general to a smaller variance (smaller error) for θ̂, as the estimator exploits the additional information from the normalization.
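A rough sketch of case 1 (the angular-distribution shape, luminosity and parameter values below are invented toy assumptions, not from the lecture):

```python
# Sketch: extended ML fit in which the expected event count depends on the
# same parameter as the shape.
# Model: f(x; a) = (1 + a*x^2) / (2 + 2a/3) on [-1, 1], nu(a) = lumi * (2 + 2a/3).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
a_true, lumi = 0.5, 300.0

# Poisson-fluctuated event count, events drawn from the shape by accept-reject.
n_obs = rng.poisson(lumi * (2 + 2 * a_true / 3))
x = []
while len(x) < n_obs:
    xi = rng.uniform(-1, 1)
    if rng.uniform(0, 1 + a_true) < 1 + a_true * xi**2:
        x.append(xi)
x = np.array(x)

def neg_lnL(a):
    nu = lumi * (2 + 2 * a / 3)              # normalization also depends on a
    f = (1 + a * x**2) / (2 + 2 * a / 3)     # normalized shape
    return nu - np.sum(np.log(nu * f))       # -ln L_EML up to constants

a_hat = minimize_scalar(neg_lnL, bounds=(0.0, 2.0), method="bounded").x
print(f"a_hat = {a_hat:.3f} (true value {a_true})")
```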

7 2. When ν is a parameter independent of θ, one obtains the same estimators for θ as with ordinary ML (and ν̂ = n).
Example: the contributions f_1, f_2, f_3 to a distribution are known in shape; their relative contributions are not.
p.d.f.: f(x; θ) = Σ_i θ_i f_i(x), i = 1, …, m.
The parameters are not independent, as Σ_i θ_i = 1! One can determine only m − 1 parameters by ML.
A symmetric treatment of all contributions is possible with EML (next slide).
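A small sketch of the constrained ordinary-ML fit (the three component shapes and fractions below are hypothetical); only m − 1 = 2 fractions are free, the third is fixed by the constraint:

```python
# Sketch: 3-component mixture fit with theta_3 = 1 - theta_1 - theta_2.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, uniform

rng = np.random.default_rng(8)
f1, f2, f3 = norm(2.0, 0.3).pdf, norm(6.0, 0.5).pdf, uniform(0.0, 10.0).pdf
x = np.concatenate([rng.normal(2.0, 0.3, 300),      # component 1 (30 %)
                    rng.normal(6.0, 0.5, 500),      # component 2 (50 %)
                    rng.uniform(0.0, 10.0, 200)])   # component 3 (20 %)

def neg_lnL(theta):
    t1, t2 = theta
    t3 = 1.0 - t1 - t2                       # third fraction fixed by the constraint
    if t1 < 0 or t2 < 0 or t3 < 0:
        return np.inf                        # keep all fractions physical
    return -np.sum(np.log(t1 * f1(x) + t2 * f2(x) + t3 * f3(x)))

res = minimize(neg_lnL, x0=[0.3, 0.4], method="Nelder-Mead")
print("theta_1, theta_2, theta_3 =", *res.x, 1.0 - res.x.sum())
```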

8 By defining μ_i = θ_i ν (μ_i = expected number of events of type i), the μ_i are no longer subject to the constraint; each μ_i is a Poisson mean:
  ln L(μ) = −Σ_j μ_j + Σ_i ln[Σ_j μ_j f_j(x_i)] + const.
Example: signal + background. An EML fit of μ_s, μ_b → μ̂_s can become negative when the data fluctuate below the expected background. Forcing μ̂_s ≥ 0 ? → bias → a combination of many experiments then has a bias!
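A sketch of such a signal-plus-background EML fit (Gaussian signal at 5.0, flat background on [0, 10]; all shapes and yields are assumptions of mine):

```python
# Sketch: extended ML fit of the signal and background yields mu_s, mu_b.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(4)
mu_s_true, mu_b_true = 30.0, 100.0

x = np.concatenate([
    rng.normal(5.0, 0.5, rng.poisson(mu_s_true)),    # signal events
    rng.uniform(0.0, 10.0, rng.poisson(mu_b_true)),  # background events
])

f_s = norm(5.0, 0.5).pdf                      # known signal shape
f_b = lambda xx: np.full_like(xx, 0.1)        # known flat background shape on [0, 10]

def neg_lnL(mu):
    mu_s, mu_b = mu
    return (mu_s + mu_b) - np.sum(np.log(mu_s * f_s(x) + mu_b * f_b(x)))

# Yields bounded at zero here for numerical safety; the slide's point is that
# an unconstrained mu_s_hat can fluctuate negative, and forcing it >= 0 biases it.
res = minimize(neg_lnL, x0=[20.0, 80.0], bounds=[(1e-6, None), (1e-6, None)])
print("mu_s_hat, mu_b_hat =", res.x)
```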

9 Maximum likelihood with binned data
For very large data samples the log-likelihood function becomes difficult to compute → fill the measurements into a histogram, yielding a number of entries n = (n_1, …, n_N) in N bins.
The expectation values of the numbers of entries are given by
  ν_i(θ) = n_tot ∫ f(x; θ) dx over [x_i^min, x_i^max],
where x_i^min and x_i^max are the bin limits.
The likelihood function is then the multinomial probability for n, and
  ln L(θ) = Σ_i n_i ln ν_i(θ) + terms that do not depend on θ.
The result is simpler to interpret when n_tot is itself treated as a Poisson variable;

10 then each n_i is an independent Poisson variable with mean ν_i, and
  ln L(θ) − ln L_max = Σ_i [ n_i ln(ν_i(θ)/n_i) + n_i − ν_i(θ) ]
(when n_i = 0 the term [ ] should be substituted by −ν_i(θ)).
When all n_i are large, −2(ln L − ln L_max) ≈ Σ_i (n_i − ν_i(θ))²/ν_i(θ) (→ method of least squares).
For large n_tot the binned ML method gives only slightly larger errors than unbinned ML.
Testing goodness-of-fit with maximum likelihood
The ML method does not directly suggest a method of testing the "goodness-of-fit". After a successful fit the results must be checked for consistency.
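Returning to the binned likelihood above, here is a compact sketch of a binned ML fit (toy exponential sample; the binning and sample size are my own choices):

```python
# Sketch: binned ML fit treating each bin content as a Poisson variable
# with mean nu_i(tau).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
t = rng.exponential(2.0, 10000)
edges = np.linspace(0.0, 10.0, 41)
n_i, _ = np.histogram(t, bins=edges)
n_tot = len(t)

def nu_i(tau):
    # expected entries per bin from the integrated exponential p.d.f.
    cdf = 1.0 - np.exp(-edges / tau)
    return n_tot * np.diff(cdf)

def neg_lnL(tau):
    nu = nu_i(tau)
    return np.sum(nu - n_i * np.log(nu))    # -ln L up to terms independent of tau

tau_hat = minimize_scalar(neg_lnL, bounds=(0.5, 5.0), method="bounded").x
print(f"binned ML estimate: tau_hat = {tau_hat:.3f}")
```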

11 Possibilities:
1. MC study of L_max:
a) Generate many MC experiments using the fitted parameters θ̂.
b) Histogram the resulting values of ln L_max.
c) Compare with the observed L_max from the real experiment.
d) Calculate the "P-value" = probability to obtain an L_max at least as extreme as the observed one; if the model is correct, P is uniformly distributed with E[P] = 50%.
[Figure: p.d.f. of L_max, with the observed L_max and the P-value indicated]
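A sketch of such an MC study for a toy exponential fit (all numbers hypothetical; the "observed" data set is itself simulated here and the P-value is taken as the fraction of MC experiments below the observed ln L_max):

```python
# Sketch: histogram ln L_max from MC experiments generated with the fitted
# parameter and compare with the observed value.
import numpy as np

rng = np.random.default_rng(6)
n = 200

def ln_L_max(sample):
    tau_hat = sample.mean()                  # ML estimate for exponential data
    return np.sum(-np.log(tau_hat) - sample / tau_hat)

observed = rng.exponential(2.0, n)           # stand-in for the real data
lnL_obs = ln_L_max(observed)

tau_fit = observed.mean()
lnL_mc = np.array([ln_L_max(rng.exponential(tau_fit, n)) for _ in range(2000)])

p_value = np.mean(lnL_mc <= lnL_obs)         # fraction of MC experiments below data
print(f"observed ln L_max = {lnL_obs:.1f}, P-value = {p_value:.2f}")
```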

12 χ²-test: divide the sample into N bins and build
  χ² = Σ_i (n_i − ν_i(θ̂))²/ν_i(θ̂)   (n_tot free (Poisson), N − m d.o.f.)
or
  χ² = Σ_i (n_i − n_tot p_i(θ̂))²/(n_tot p_i(θ̂))   (n_tot fixed, N − m − 1 d.o.f.)
For large n_tot the statistics given above follow a χ² distribution with N − m or N − m − 1 degrees of freedom, respectively.
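A sketch of the χ² goodness-of-fit test for the toy binned exponential fit (binning, sample size and the use of the sample mean as the fitted value are my own simplifications):

```python
# Sketch: chi^2 goodness-of-fit with N - m degrees of freedom
# (n_tot free, m = 1 fitted parameter).
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(7)
t = rng.exponential(2.0, 5000)
edges = np.linspace(0.0, 10.0, 21)
n_i, _ = np.histogram(t, bins=edges)

tau_hat = t.mean()                                 # stand-in for the fitted value
nu_i = len(t) * np.diff(1.0 - np.exp(-edges / tau_hat))

chi2_val = np.sum((n_i - nu_i) ** 2 / nu_i)
ndof = len(n_i) - 1                                # N bins minus m = 1 fitted parameter
print(f"chi2/ndof = {chi2_val:.1f}/{ndof}, P-value = {chi2.sf(chi2_val, ndof):.2f}")
```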

13 Combining measurements with maximum likelihood
Example: one experiment with n measurements x_i, p.d.f. f(x; θ); a second experiment with m measurements y_i, p.d.f. g(y; θ) (same parameter θ)
→ common likelihood function:
  L(θ) = ∏_{i=1..n} f(x_i; θ) · ∏_{j=1..m} g(y_j; θ),
or equivalently ln L(θ) = ln L_x(θ) + ln L_y(θ)
→ a common estimate of θ through the maximum of L(θ).
Suppose the two experiments, based on the measurements of x and y, give estimators θ̂_x and θ̂_y. One reports the estimated standard deviations σ̂_x and σ̂_y as the errors on θ̂_x and θ̂_y. When θ̂_x and θ̂_y are independent (and the log-likelihoods are approximately parabolic), combining the likelihoods is equivalent to the error-weighted average
  θ̂ = (θ̂_x/σ̂_x² + θ̂_y/σ̂_y²)/(1/σ̂_x² + 1/σ̂_y²),  with 1/σ̂² = 1/σ̂_x² + 1/σ̂_y².
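A minimal sketch of this combination (the two input values and errors below are made-up numbers):

```python
# Sketch: combine two independent estimates by the error-weighted average,
# equivalent to adding their (parabolic) log-likelihoods.
def combine(theta_x, sigma_x, theta_y, sigma_y):
    w_x, w_y = 1.0 / sigma_x**2, 1.0 / sigma_y**2
    theta = (w_x * theta_x + w_y * theta_y) / (w_x + w_y)
    sigma = (w_x + w_y) ** -0.5
    return theta, sigma

print(combine(1.20, 0.10, 1.05, 0.15))   # -> combined estimate and its error
```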

