Goodness of fit, confidence intervals and limits Jorge Andre Swieca School, Campos do Jordão, January 2003, fourth lecture
References
Statistical Data Analysis, G. Cowan, Oxford, 1998
Statistics: A Guide to the Use of Statistical Methods in the Physical Sciences, R. Barlow, J. Wiley & Sons, 1989
Particle Data Group (PDG), Review of Particle Physics, 2002 electronic edition
Data Analysis: Statistical and Computational Methods for Scientists and Engineers, S. Brandt, Third Edition, Springer, 1999
Limits "Do you, like Hamlet, dread the unknown? But what is known? What is it that you know, that you should call anything in particular unknown?" Álvaro de Campos (Fernando Pessoa). "If you have the truth, keep it!" Lisbon Revisited, Álvaro de Campos
Statistical tests A statistical test asks how well the data stand in agreement with given predicted probabilities, i.e. with a hypothesis. The hypothesis under test is the null hypothesis H0, tested against an alternative H1. The test is based on a test statistic t, a function of the measured variables. The significance level α is the probability of an error of the first kind (rejecting H0 when it is true); the probability of an error of the second kind (accepting H0 when H1 is true) is β. The power of the test, 1 − β, measures its ability to discriminate against H1.
Neyman-Pearson lemma Where should one place t_cut? Accept H0 (signal) for t < t_cut, and reject it in favour of H1 (background) otherwise. In one dimension the cut determines the efficiency (and purity); in m dimensions the definition of the acceptance region is not obvious. The Neyman-Pearson lemma states that the highest power (highest signal purity) for a given significance level α is obtained by taking the acceptance region to be the region of t-space where the likelihood ratio f(t|H0)/f(t|H1) exceeds a constant c, determined by the desired efficiency.
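A minimal numerical sketch of the one-dimensional case, using a hypothetical example in which the test statistic t is Gaussian under both hypotheses (all numbers below are illustrative assumptions, not from the lecture). For equal-width Gaussians the likelihood ratio is monotonic in t, so the Neyman-Pearson acceptance region reduces to a simple cut t < t_cut:

```python
from statistics import NormalDist

# Hypothetical example: t is Gaussian under both hypotheses.
sig = NormalDist(mu=0.0, sigma=1.0)   # f(t|H0), signal
bkg = NormalDist(mu=2.0, sigma=1.0)   # f(t|H1), background

# For Gaussians of equal width, f(t|H0)/f(t|H1) decreases monotonically
# with t, so the optimal acceptance region is simply t < t_cut.
alpha = 0.05                           # significance level (error of the first kind)
t_cut = sig.inv_cdf(1.0 - alpha)       # P(t >= t_cut | H0) = alpha

signal_eff = sig.cdf(t_cut)            # probability to accept H0 when H0 is true
power = 1.0 - bkg.cdf(t_cut)           # probability to reject H0 when H1 is true
print(t_cut, signal_eff, power)
```

For this toy configuration the cut lands at t_cut ≈ 1.64 with power ≈ 0.64; moving the two distributions further apart increases the power at fixed α.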
Goodness of fit A goodness-of-fit test asks how well a given null hypothesis H0 is compatible with the observed data, with no reference to an alternative hypothesis. Example with coins: N tosses, n_h heads, n_t = N − n_h tails. Is the coin fair, i.e. are heads and tails equally likely? Test statistic: n_h, which follows a binomial distribution with p = 0.5. Suppose N = 20 and n_h = 17 are observed; under H0 one expects E[n_h] = Np = 10.
Goodness of fit P-value: the probability P, under H0, of obtaining a result as compatible with H0 or less so than the one actually observed. The P-value is a random variable, whereas α is a constant specified before carrying out the test. In Bayesian statistics one would use Bayes' theorem to assign a probability to H0 (which requires specifying a prior probability). The P-value is often interpreted incorrectly as the probability of H0; it is, rather, the fraction of times one would obtain data as compatible with H0 or less so if the experiment (20 coin tosses) were repeated under similar circumstances.
Goodness of fit It is easy to identify the region of values of t with equal or less compatibility with the hypothesis than the observed value (alternative hypothesis: p ≠ 0.5). See also the optional stopping problem.
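The coin-toss P-value above can be computed directly. With the two-sided alternative p ≠ 0.5, results as incompatible with H0 as the observed n_h = 17 are n_h ≥ 17 together with, by symmetry, n_h ≤ 3:

```python
from math import comb

# Coin example from the lecture: N = 20 tosses, n_h = 17 heads, H0: p = 0.5.
N, n_h, p = 20, 17, 0.5

def binom_pmf(n, N, p):
    """Binomial probability of n heads in N tosses."""
    return comb(N, n) * p**n * (1.0 - p)**(N - n)

# Two-sided P-value: upper tail n >= 17 plus symmetric lower tail n <= 3.
p_value = sum(binom_pmf(n, N, p) for n in range(n_h, N + 1))
p_value += sum(binom_pmf(n, N, p) for n in range(0, N - n_h + 1))
print(p_value)  # ≈ 0.0026
```

A P-value of about 0.0026 quantifies how rarely a fair coin would give a result this extreme, without being the probability that the coin is fair.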
Significance of an observed signal Is a discrepancy between data and expectation sufficiently significant to merit a claim of a new discovery? Let n_s be the number of signal events, a Poisson variable with mean ν_s, and n_b the number of background events, a Poisson variable with mean ν_b. The probability to observe n = n_s + n_b events is then Poisson with mean ν_s + ν_b: P(n; ν_s, ν_b) = ((ν_s + ν_b)^n / n!) e^−(ν_s + ν_b). An experiment yields n_obs events; we want to quantify our degree of confidence in the discovery of a new effect (ν_s ≠ 0). How likely is it to find n_obs events or more from background alone?
Significance of an observed signal Example: expect ν_b = 0.5 and observe n_obs = 5; then P(n ≥ n_obs) = 1.7×10⁻⁴. This is not the probability of the hypothesis ν_s = 0! It is the probability, under the hypothesis ν_s = 0, of obtaining as many events as observed or more.
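The number quoted above follows from the Poisson tail under the background-only hypothesis:

```python
from math import exp, factorial

# Background-only hypothesis from the lecture: n ~ Poisson(nu_b), nu_b = 0.5.
nu_b, n_obs = 0.5, 5

def poisson_pmf(n, nu):
    """Poisson probability of observing n events with mean nu."""
    return exp(-nu) * nu**n / factorial(n)

# P-value: probability, under nu_s = 0, to observe n_obs events or more.
p_value = 1.0 - sum(poisson_pmf(n, nu_b) for n in range(n_obs))
print(p_value)  # ≈ 1.7e-4
```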
Significance of an observed signal How should the measurement be reported? An estimate of ν_s, e.g. ν̂_s = n_obs − ν_b = 4.5 with statistical error √n_obs ≈ 2.2, would be misleading: being only two standard deviations from zero, it gives the impression that ν_s is not very incompatible with zero. Yes: report the probability that a Poisson variable of mean ν_b will fluctuate up to n_obs or higher. No: the probability that a variable with mean n_obs will fluctuate down to ν_b or lower.
Pearson's χ² test Consider a histogram of x with N bins, with n_i entries observed and ν_i expected in bin i. Construct a statistic which reflects the level of agreement between the observed and expected histograms: χ² = Σ_i (n_i − ν_i)² / ν_i. If the data are approximately Gaussian (Poisson distributed with ν_i not too small), χ² follows a χ² distribution for N degrees of freedom regardless of the distribution of x ("distribution free"). A larger χ² means a larger discrepancy between data and the hypothesis.
Pearson's χ² test χ² ≈ n_d, i.e. χ²/n_d ≈ 1 (rule of thumb for a good fit).
Before, each n_i was a Poisson variable with mean ν_i. Now set n_tot = Σ n_i fixed; the n_i are then distributed as a multinomial with probabilities p_i = ν_i / n_tot. One is no longer testing the total number of expected and observed events, but only the distribution of x. For a large number of entries in each bin and p_i known, χ² follows a χ² distribution for N − 1 degrees of freedom. In general, if m parameters are estimated from the data, n_d = N − m.
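The χ² statistic and the rule of thumb can be sketched on a toy histogram (the bin contents below are hypothetical numbers chosen for illustration):

```python
# Toy histogram (hypothetical numbers): observed and expected bin contents.
n  = [5, 12, 9, 4]            # observed entries n_i
nu = [6.0, 10.0, 10.0, 4.0]   # expected entries nu_i under H0

# Pearson's chi-square statistic.
chi2 = sum((ni - nui)**2 / nui for ni, nui in zip(n, nu))

# No parameters fitted and total not fixed: n_d = N degrees of freedom.
n_dof = len(n)
print(chi2, chi2 / n_dof)     # rule of thumb: chi2 / n_dof ≈ 1 for a good fit
```

Note that with these small expected counts the Gaussian approximation behind the χ² distribution is marginal; in practice one wants a large number of entries in each bin, as stated above.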
ML: estimator θ̂ for θ, standard deviation as statistical error Given n observations of x and a hypothesized p.d.f. f(x; θ), the standard deviation σ_θ̂ of the estimator can be obtained by several methods: analytic, the RCF bound, Monte Carlo, or graphical. In the Monte Carlo method the measurement is repeated many times, each based on n observations: the estimator's distribution is centred around the true value θ with a true standard deviation σ_θ̂, estimated by θ̂ and σ̂_θ̂. Most practical estimators become approximately Gaussian in the large sample limit.
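The Monte Carlo method listed above can be sketched for a simple hypothetical case, an exponential p.d.f. f(x; τ) with true τ = 1, for which the ML estimator is the sample mean (the sample sizes and seed are illustrative choices):

```python
import random
from statistics import mean, stdev

# Monte Carlo estimate of the statistical error of an ML estimator.
# Hypothetical case: exponential p.d.f. with true tau = 1; the ML
# estimator tau_hat is the sample mean.
random.seed(1)
tau_true, n, n_experiments = 1.0, 100, 1000

estimates = []
for _ in range(n_experiments):
    sample = [random.expovariate(1.0 / tau_true) for _ in range(n)]
    estimates.append(mean(sample))      # tau_hat for this simulated experiment

avg = mean(estimates)                   # should be close to the true tau
err = stdev(estimates)                  # Monte Carlo estimate of sigma_tau_hat
print(avg, err)                         # analytic result: tau / sqrt(n) = 0.1
```

The spread of the simulated estimates reproduces the analytic standard deviation τ/√n, illustrating why the Monte Carlo method can stand in for the analytic one when the latter is intractable.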
Classical confidence intervals From n observations of x, evaluate an estimator θ̂ for a parameter θ, obtaining the value θ̂_obs and its p.d.f. g(θ̂; θ) (for a given θ, which is unknown). Choose probabilities α and β and find functions u_α(θ) and v_β(θ) such that P(θ̂ ≥ u_α(θ)) = α and P(θ̂ ≤ v_β(θ)) = β.
Classical confidence intervals The probability for the estimator to be inside the belt, regardless of θ, is P(v_β(θ) ≤ θ̂ ≤ u_α(θ)) = 1 − α − β, where u_α and v_β are assumed to be monotonically increasing functions of θ.
Classical confidence intervals Usually one quotes the central confidence interval, with α = β. a is the hypothetical value of θ for which a fraction α of the repeated estimates θ̂ would be higher than the one obtained.
Classical confidence intervals Relationship between a confidence interval and a test of goodness of fit: test the hypothesis θ = a using the region of θ̂ having equal or less agreement than the result obtained. In the test, the P-value = α is a random variable and θ = a is specified; for a confidence interval, α is specified first and a is a random quantity depending on the data.
Classical confidence intervals Over many repeated experiments, the interval [a, b] would include the true value θ in a fraction 1 − α − β of them. This does not mean that the probability that the true value of θ is in the fixed interval [a, b] is 1 − α − β. In the frequency interpretation θ is not a random variable; it is the interval that fluctuates, since it is constructed from the data.
Gaussian distributed θ̂ A simple and very important application. By the central limit theorem, any estimator that is a linear function of a sum of random variables becomes Gaussian in the large sample limit. With σ_θ̂ known, suppose the experiment resulted in θ̂_obs.
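For a Gaussian estimator with known standard deviation, the confidence belt inverts to the familiar symmetric interval θ̂_obs ± z·σ_θ̂. A minimal sketch, using hypothetical numbers for the measurement (a sample mean with known σ):

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical numbers: n measurements of x with known sigma; the estimator
# is the sample mean x_bar, Gaussian with standard deviation sigma / sqrt(n).
sigma, n, x_bar_obs = 2.0, 25, 10.3
sigma_xbar = sigma / sqrt(n)

# Central 95% confidence interval: alpha = beta = 0.025.
z = NormalDist().inv_cdf(1.0 - 0.025)       # standard Gaussian quantile, ≈ 1.96
a, b = x_bar_obs - z * sigma_xbar, x_bar_obs + z * sigma_xbar
print(a, b)
```

Here the interval [a, b] is the set of hypothetical true means for which the observed x̄ would not be in the extreme α or β tails, which is exactly the classical construction of the previous slides specialized to the Gaussian case.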