G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 1 Lecture 3
1 Probability (90 min.): definition, Bayes' theorem, probability densities and their properties, catalogue of pdfs, Monte Carlo
2 Statistical tests (90 min.): general concepts, test statistics, multivariate methods, goodness-of-fit tests
3 Parameter estimation (90 min.): general concepts, maximum likelihood, variance of estimators, least squares
4 Interval estimation (60 min.): setting limits
5 Further topics (60 min.): systematic errors, MCMC

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 2 Parameter estimation The parameters of a pdf are constants that characterize its shape, e.g. f(x; θ), where x is the r.v. and θ the parameter. Suppose we have a sample of observed values: x = (x₁, ..., xₙ). We want to find some function of the data to estimate the parameter(s): θ̂(x) ← estimator written with a hat. Sometimes we say 'estimator' for the function of x₁, ..., xₙ, and 'estimate' for the value of the estimator with a particular data set.

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 3 Properties of estimators If we were to repeat the entire measurement, the estimates from each would follow a pdf (the slide sketches the 'best', 'large variance', and 'biased' cases). We want small (or zero) bias (systematic error): → the average of repeated estimates should tend to the true value. And we want a small variance (statistical error): → small bias and small variance are in general conflicting criteria.

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 4 An estimator for the mean (expectation value) Parameter: μ = E[x]. Estimator: μ̂ = (1/n) Σᵢ xᵢ (the 'sample mean'). We find: E[μ̂] = μ (zero bias) and V[μ̂] = σ²/n.

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 5 An estimator for the variance Parameter: σ² = V[x]. Estimator: s² = (1/(n−1)) Σᵢ (xᵢ − μ̂)² (the 'sample variance'; the factor of n−1 makes this unbiased). We find: E[s²] = σ², and V[s²] = (1/n)(m₄ − σ⁴(n−3)/(n−1)), where m₄ is the 4th central moment of x.
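A minimal numerical illustration of these two estimators (assuming a standard Python/NumPy environment; the exponential sample and seed are made up for demonstration, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
x = rng.exponential(scale=1.0, size=50)   # hypothetical sample, true mean = 1

mean_hat = x.mean()            # sample mean: unbiased estimator of E[x]
var_hat = x.var(ddof=1)        # sample variance: ddof=1 gives the 1/(n-1) factor

print(f"sample mean     = {mean_hat:.3f}")
print(f"sample variance = {var_hat:.3f}")
```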

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 6 The likelihood function Consider n independent observations of x: x₁, ..., xₙ, where x follows f(x; θ). The joint pdf for the whole data sample is: f(x₁, ..., xₙ; θ) = ∏ᵢ f(xᵢ; θ). Now evaluate this function with the data sample obtained and regard it as a function of the parameter(s). This is the likelihood function: L(θ) = ∏ᵢ f(xᵢ; θ) (with the xᵢ constant).

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 7 Maximum likelihood estimators If the hypothesized θ is close to the true value, then we expect a high probability to get data like that which we actually found. So we define the maximum likelihood (ML) estimator(s) to be the parameter value(s) for which the likelihood is maximum. ML estimators are not guaranteed to have any 'optimal' properties (but in practice they're very good).

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 8 ML example: parameter of exponential pdf Consider the exponential pdf f(t; τ) = (1/τ) e^(−t/τ), and suppose we have data t₁, ..., tₙ. The likelihood function is L(τ) = ∏ᵢ (1/τ) e^(−tᵢ/τ). The value of τ for which L(τ) is maximum also gives the maximum value of its logarithm (the log-likelihood function): ln L(τ) = Σᵢ ln f(tᵢ; τ).

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 9 ML example: parameter of exponential pdf (2) Find its maximum from ∂ln L/∂τ = 0 → τ̂ = (1/n) Σᵢ tᵢ. Monte Carlo test: generate 50 values using τ = 1 and compute the ML estimate from the sample. (Exercise: show this estimator is unbiased.)
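A hedged sketch of the Monte Carlo test described above (NumPy assumed; the seed is arbitrary, so the number printed is not the one on the slide). For the exponential pdf the ML estimator works out to the sample mean:

```python
import numpy as np

rng = np.random.default_rng(seed=2)
tau_true = 1.0
t = rng.exponential(scale=tau_true, size=50)   # 50 values generated with tau = 1

# For f(t; tau) = (1/tau) exp(-t/tau), maximizing ln L gives tau_hat = mean(t)
tau_hat = t.mean()
print(f"ML estimate tau_hat = {tau_hat:.3f}")
```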

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 10 Functions of ML estimators Suppose we had written the exponential pdf as f(t; λ) = λ e^(−λt), i.e., we use λ = 1/τ. What is the ML estimator for λ? For a function a(θ) of a parameter θ, it doesn't matter whether we express L as a function of θ or of a. The ML estimator of a function a(θ) is simply â = a(θ̂). So for the decay constant we have λ̂ = 1/τ̂ = n / Σᵢ tᵢ. Caveat: λ̂ is biased, even though τ̂ is unbiased (bias → 0 for n → ∞). One can show E[λ̂] = λ n/(n−1).
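The invariance property can be checked numerically; a small sketch (same hypothetical sample as above, SciPy assumed): maximizing the likelihood written directly in terms of λ gives the same answer as 1/τ̂.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(seed=2)
t = rng.exponential(scale=1.0, size=50)

tau_hat = t.mean()                      # ML estimate of tau

# Maximize ln L(lambda) = n ln(lambda) - lambda * sum(t) directly
res = minimize_scalar(lambda lam: -(len(t) * np.log(lam) - lam * t.sum()),
                      bounds=(1e-6, 10.0), method="bounded")
lam_hat = res.x
print(f"1/tau_hat = {1/tau_hat:.4f}, lambda_hat = {lam_hat:.4f}")   # they agree
```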

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 11 Example of ML: parameters of Gaussian pdf Consider independent x₁, ..., xₙ, with xᵢ ~ Gaussian(μ, σ²). The log-likelihood function is ln L(μ, σ²) = Σᵢ [ −½ ln(2πσ²) − (xᵢ − μ)²/(2σ²) ].

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 12 Example of ML: parameters of Gaussian pdf (2) Set the derivatives with respect to μ and σ² to zero and solve, giving μ̂ = (1/n) Σᵢ xᵢ and σ̂² = (1/n) Σᵢ (xᵢ − μ̂)². We already know that the estimator for μ is unbiased. But we find E[σ̂²] = ((n−1)/n) σ², so the ML estimator for σ² has a bias, but b → 0 for n → ∞. Recall, however, that s² = (1/(n−1)) Σᵢ (xᵢ − μ̂)² is an unbiased estimator for σ².
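A hedged Monte Carlo check of the bias statement (NumPy assumed; the sample size and number of repetitions are arbitrary choices): the ML estimator divides by n and comes out low on average, while dividing by n−1 removes the bias.

```python
import numpy as np

rng = np.random.default_rng(seed=3)
mu_true, sigma2_true, n, n_rep = 0.0, 4.0, 10, 100_000

x = rng.normal(mu_true, np.sqrt(sigma2_true), size=(n_rep, n))
var_ml = x.var(axis=1, ddof=0)     # ML estimator: divides by n   -> biased
var_s2 = x.var(axis=1, ddof=1)     # sample variance: divides by n-1 -> unbiased

print(f"mean of ML estimator: {var_ml.mean():.3f}  (expect {sigma2_true*(n-1)/n:.3f})")
print(f"mean of s^2:          {var_s2.mean():.3f}  (expect {sigma2_true:.3f})")
```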

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 13 Variance of estimators: Monte Carlo method Having estimated our parameter we now need to report its 'statistical error', i.e., how widely distributed the estimates would be if we were to repeat the entire measurement many times. One way to do this is to simulate the entire experiment many times with a Monte Carlo program (using the ML estimate as the parameter value in the MC). For the exponential example, the sample standard deviation of the estimates gives the statistical error on τ̂. Note the distribution of estimates is roughly Gaussian − (almost) always true for ML in the large sample limit.
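A minimal sketch of this Monte Carlo method (assumptions: NumPy, the exponential example with τ = 1, 50 events per experiment and 1000 pseudo-experiments, all chosen for illustration): repeat the experiment many times, then the spread of the estimates gives the statistical error.

```python
import numpy as np

rng = np.random.default_rng(seed=4)
tau_true, n, n_exp = 1.0, 50, 1000

# Each row is one simulated experiment; tau_hat is the sample mean in each
samples = rng.exponential(scale=tau_true, size=(n_exp, n))
tau_hats = samples.mean(axis=1)

print(f"mean of estimates = {tau_hats.mean():.3f}")
print(f"std of estimates  = {tau_hats.std(ddof=1):.3f}   # statistical error on tau_hat")
```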

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 14 Variance of estimators from information inequality The information inequality (RCF) sets a minimum variance bound (MVB) for any estimator (not only ML): V[θ̂] ≥ (1 + ∂b/∂θ)² / E[ −∂²ln L/∂θ² ]. Often the bias b is small, and equality either holds exactly or is a good approximation (e.g. large data sample limit). Then V[θ̂] ≈ 1 / E[ −∂²ln L/∂θ² ]. Estimate this using the 2nd derivative of ln L at its maximum: V̂[θ̂] = ( −∂²ln L/∂θ² |_θ̂ )⁻¹.
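A sketch of estimating the variance from the curvature of ln L (NumPy assumed; the finite-difference step is chosen by hand). For the exponential example the second derivative can also be done analytically, which gives σ_τ̂ ≈ τ̂/√n, so the code prints both as a cross-check.

```python
import numpy as np

rng = np.random.default_rng(seed=2)
t = rng.exponential(scale=1.0, size=50)
tau_hat = t.mean()

def lnL(tau):
    return -len(t) * np.log(tau) - t.sum() / tau

# Numerical 2nd derivative of ln L at its maximum
h = 1e-4
d2 = (lnL(tau_hat + h) - 2 * lnL(tau_hat) + lnL(tau_hat - h)) / h**2
sigma_tau = 1.0 / np.sqrt(-d2)

print(f"tau_hat = {tau_hat:.3f} +/- {sigma_tau:.3f}")
print(f"analytic check: tau_hat/sqrt(n) = {tau_hat/np.sqrt(len(t)):.3f}")
```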

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 15 Variance of estimators: graphical method Expand ln L(θ) about its maximum: ln L(θ) = ln L(θ̂) + [∂ln L/∂θ]_θ̂ (θ − θ̂) + ½ [∂²ln L/∂θ²]_θ̂ (θ − θ̂)² + ... The first term is ln L_max, the second term is zero, and for the third term use the information inequality (assume equality): ln L(θ) ≈ ln L_max − (θ − θ̂)²/(2σ²_θ̂), i.e., ln L(θ̂ ± σ_θ̂) ≈ ln L_max − 1/2. → to get σ_θ̂, change θ away from θ̂ until ln L decreases by 1/2.
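A hedged numerical sketch of the graphical method (same hypothetical exponential sample as above; a simple grid scan is used instead of a root finder for clarity): move τ away from τ̂ until ln L has dropped by 1/2.

```python
import numpy as np

rng = np.random.default_rng(seed=2)
t = rng.exponential(scale=1.0, size=50)
tau_hat = t.mean()

def lnL(tau):
    return -len(t) * np.log(tau) - t.sum() / tau

lnL_max = lnL(tau_hat)
taus = np.linspace(0.5 * tau_hat, 2.0 * tau_hat, 10_000)
inside = taus[lnL(taus) >= lnL_max - 0.5]       # region where ln L >= ln L_max - 1/2

sigma_minus = tau_hat - inside.min()
sigma_plus = inside.max() - tau_hat
print(f"tau_hat = {tau_hat:.3f}  -{sigma_minus:.3f} / +{sigma_plus:.3f}")
```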

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 16 Example of variance by graphical method ML example with exponential: Not quite parabolic ln L since finite sample size (n = 50).

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 17 Information inequality for n parameters Suppose we have estimated n parameters θ = (θ₁, ..., θₙ). The (inverse) minimum variance bound is given by the Fisher information matrix: I(θ)ᵢⱼ = −E[ ∂²ln L/∂θᵢ∂θⱼ ]. The information inequality then states that V − I⁻¹ is a positive semi-definite matrix; therefore, for the diagonal elements, V[θ̂ᵢ] ≥ (I⁻¹)ᵢᵢ. Often I⁻¹ is used as an approximation for the covariance matrix, estimated using e.g. the matrix of 2nd derivatives at the maximum of L.

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 18 Example of ML with 2 parameters Consider a scattering angle distribution with x = cos θ: f(x; α, β) ∝ 1 + αx + βx², or, if x_min < x < x_max, one always needs to normalize so that ∫ f(x; α, β) dx = 1 over (x_min, x_max). Example: α = 0.5, β = 0.5, x_min = −0.95, x_max = 0.95; generate n = 2000 events with Monte Carlo.

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 19 Example of ML with 2 parameters: fit result Finding the maximum of ln L(α, β) numerically (MINUIT) gives the estimates α̂ and β̂ with their errors. N.B. No binning of the data for the fit, but one can compare to a histogram for goodness-of-fit (e.g. 'visual' or χ²). (Co)variances from the MINUIT routine HESSE.
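The slides use MINUIT; a rough SciPy equivalent is sketched below (the accept–reject generation, the Nelder–Mead minimizer, and the finite-difference Hessian standing in for HESSE are all assumptions, not the slides' code; iminuit would be the more direct analogue). The pdf is 1 + αx + βx², normalized on (−0.95, 0.95).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(seed=5)
alpha_true, beta_true, xmin, xmax, n = 0.5, 0.5, -0.95, 0.95, 2000

def pdf(x, alpha, beta):
    # normalization of 1 + alpha*x + beta*x^2 on (xmin, xmax)
    norm = (xmax - xmin) + 0.5 * alpha * (xmax**2 - xmin**2) + beta * (xmax**3 - xmin**3) / 3.0
    return (1.0 + alpha * x + beta * x**2) / norm

# Generate events with accept-reject (2.0 safely bounds the unnormalized density here)
x = np.empty(0)
while x.size < n:
    xi = rng.uniform(xmin, xmax, size=n)
    u = rng.uniform(0.0, 2.0, size=n)
    x = np.concatenate([x, xi[u < 1.0 + alpha_true * xi + beta_true * xi**2]])
x = x[:n]

def neg_lnL(p):
    dens = pdf(x, p[0], p[1])
    if np.any(dens <= 0.0):
        return 1e10                         # keep the minimizer inside the allowed region
    return -np.sum(np.log(dens))

res = minimize(neg_lnL, x0=[0.0, 0.0], method="Nelder-Mead")

# Covariance from the matrix of 2nd derivatives of -ln L at the minimum (HESSE-like)
h = 1e-4
p0 = res.x
H = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        pp = p0.copy(); pp[i] += h; pp[j] += h
        pm = p0.copy(); pm[i] += h; pm[j] -= h
        mp = p0.copy(); mp[i] -= h; mp[j] += h
        mm = p0.copy(); mm[i] -= h; mm[j] -= h
        H[i, j] = (neg_lnL(pp) - neg_lnL(pm) - neg_lnL(mp) + neg_lnL(mm)) / (4 * h * h)
cov = np.linalg.inv(H)

print(f"alpha_hat = {res.x[0]:.3f} +/- {np.sqrt(cov[0, 0]):.3f}")
print(f"beta_hat  = {res.x[1]:.3f} +/- {np.sqrt(cov[1, 1]):.3f}")
```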

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 20 Two-parameter fit: MC study Repeat ML fit with 500 experiments, all with n = 2000 events: Estimates average to ~ true values; (Co)variances close to previous estimates; marginal pdfs approximately Gaussian.

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 21 The ln L_max − 1/2 contour For large n, ln L takes on a quadratic form near the maximum: ln L(α, β) ≈ ln L_max − 1/(2(1−ρ²)) [ ((α−α̂)/σ_α̂)² + ((β−β̂)/σ_β̂)² − 2ρ ((α−α̂)/σ_α̂)((β−β̂)/σ_β̂) ], where ρ is the correlation coefficient of α̂ and β̂. The contour ln L(α, β) = ln L_max − 1/2 is an ellipse.

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 22 (Co)variances from ln L contour (The slide shows the α, β plane for the first MC data set.) → Tangent lines to the contour give the standard deviations. → The angle of the ellipse φ is related to the correlation. Correlations between estimators result in an increase in their standard deviations (statistical errors).

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 23 Extended ML Sometimes regard n not as fixed, but as a Poisson r.v. with mean ν. The result of the experiment is then defined as: n, x₁, ..., xₙ. The (extended) likelihood function is: L(ν, θ) = (νⁿ/n!) e^(−ν) ∏ᵢ f(xᵢ; θ). Suppose the theory gives ν = ν(θ); then the log-likelihood is ln L(θ) = −ν(θ) + Σᵢ ln[ ν(θ) f(xᵢ; θ) ] + C, where C represents terms not depending on θ.

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 24 Extended ML (2) Extended ML uses more information → smaller errors for θ̂. Example: the expected number of events ν is proportional to the total cross section σ(θ), which is predicted as a function of the parameters of a theory, as is the distribution of a variable x. If ν does not depend on θ but remains a free parameter, extended ML gives ν̂ = n and the same θ̂ as ordinary ML. Important e.g. for anomalous couplings in e⁺e⁻ → W⁺W⁻.

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 25 Extended ML example Consider two types of events (e.g., signal and background), each of which predicts a given pdf for the variable x: f_s(x) and f_b(x). We observe a mixture of the two event types, with signal fraction θ, expected total number ν, and observed total number n. Let μ_s = θν and μ_b = (1−θ)ν; the goal is to estimate μ_s and μ_b. → The extended likelihood uses the total pdf (μ_s f_s(x) + μ_b f_b(x)) / (μ_s + μ_b).

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 26 Extended ML example (2) Maximize the log-likelihood in terms of μ_s and μ_b: ln L(μ_s, μ_b) = −(μ_s + μ_b) + Σᵢ ln[ μ_s f_s(xᵢ) + μ_b f_b(xᵢ) ] + C. Monte Carlo example with a combination of exponential and Gaussian pdfs. Here the errors reflect the total Poisson fluctuation as well as that in the proportion of signal/background.
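A rough sketch of such an extended ML fit (assumptions: SciPy, a Gaussian signal at x = 5 on a truncated exponential background on (0, 10); the yields, shapes and seed are illustrative, not those on the slide). The objective implements ln L = −(μ_s + μ_b) + Σᵢ ln[μ_s f_s(xᵢ) + μ_b f_b(xᵢ)] up to constants.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, truncexpon

rng = np.random.default_rng(seed=6)
mu_s_true, mu_b_true = 30, 200
xlo, xhi = 0.0, 10.0

# Signal: Gaussian at x = 5 with width 0.5; background: exponential (mean 3) truncated to (0, 10)
def f_s(x): return norm.pdf(x, loc=5.0, scale=0.5)
def f_b(x): return truncexpon.pdf(x, b=xhi / 3.0, scale=3.0)

n_s = rng.poisson(mu_s_true)
n_b = rng.poisson(mu_b_true)
x = np.concatenate([rng.normal(5.0, 0.5, size=n_s),
                    truncexpon.rvs(b=xhi / 3.0, scale=3.0, size=n_b, random_state=rng)])

def neg_lnL(p):
    mu_s, mu_b = p
    dens = mu_s * f_s(x) + mu_b * f_b(x)
    if np.any(dens <= 0.0) or (mu_s + mu_b) <= 0.0:
        return np.inf                        # total pdf must stay positive everywhere
    return (mu_s + mu_b) - np.sum(np.log(dens))

res = minimize(neg_lnL, x0=[20.0, 180.0], method="Nelder-Mead")
print(f"mu_s_hat = {res.x[0]:.1f}, mu_b_hat = {res.x[1]:.1f}, observed n = {len(x)}")
```

Note that μ_s is allowed to go negative as long as the total pdf stays positive, which is exactly the situation discussed on the next slides.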

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 27 Extended ML example: an unphysical estimate A downwards fluctuation of the data in the peak region can lead to even fewer events than what would be obtained from background alone. The estimate for μ_s is here pushed negative (unphysical). We can let this happen as long as the (total) pdf stays positive everywhere.

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 28 Unphysical estimators (2) Repeat the entire MC experiment many times, allowing unphysical estimates. Here the unphysical estimator is unbiased and should nevertheless be reported, since the average of a large number of unbiased estimates converges to the true value (cf. PDG).

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 29 ML with binned data Often the data are put into a histogram: n = (n₁, ..., n_N), with expectation values ν = E[n], where νᵢ(θ) = n_tot ∫_{bin i} f(x; θ) dx. If we model the data as multinomial (n_tot constant), then the log-likelihood function is: ln L(θ) = Σᵢ nᵢ ln νᵢ(θ) + C.

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 30 ML example with binned data Previous example with the exponential, now putting the data into a histogram. In the limit of zero bin width we recover the usual unbinned ML. If the nᵢ are treated as Poisson, we get the extended log-likelihood: ln L(θ) = −ν(θ) + Σᵢ nᵢ ln νᵢ(θ) + C, where ν = Σᵢ νᵢ.
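A hedged sketch of binned ML with Poisson bin contents (exponential example again; the bin edges and resulting counts are whatever the simulated sample gives, not the slide's histogram):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(seed=7)
t = rng.exponential(scale=1.0, size=50)

# 10 bins on (0, 5) plus an overflow bin so that no events are lost
edges = np.concatenate([np.linspace(0.0, 5.0, 11), [np.inf]])
n_i, _ = np.histogram(t, bins=edges)
n_tot = n_i.sum()

def nu_i(tau):
    # expected entries per bin: n_tot times the integral of f(t; tau) over the bin
    cdf = 1.0 - np.exp(-edges / tau)
    return n_tot * np.diff(cdf)

def neg_lnL(tau):
    nu = nu_i(tau)
    return np.sum(nu - n_i * np.log(nu))    # Poisson terms, dropping the constant ln(n_i!)

res = minimize_scalar(neg_lnL, bounds=(0.1, 5.0), method="bounded")
print(f"binned ML estimate: tau_hat = {res.x:.3f}")
```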

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 31 Relationship between ML and Bayesian estimators In Bayesian statistics, both θ and x are random variables. Recall the Bayesian method: use subjective probability for hypotheses (θ); before the experiment, knowledge is summarized by the prior pdf π(θ); use Bayes' theorem to update the prior in light of the data: p(θ|x) = L(x|θ) π(θ) / ∫ L(x|θ′) π(θ′) dθ′, the posterior pdf (the conditional pdf for θ given x).

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 32 ML and Bayesian estimators (2) Purist Bayesian: p(θ|x) contains all knowledge about θ. Pragmatist Bayesian: p(θ|x) could be a complicated function, → summarize it using an estimator. Take the mode of p(θ|x) (one could also use e.g. the expectation value). What do we use for π(θ)? There is no golden rule (it is subjective!); often 'prior ignorance' is represented by π(θ) = constant, in which case the mode of the posterior coincides with the ML estimate. But... we could have used a different parameter, e.g., λ = 1/θ, and if the prior π_θ(θ) is constant, then π_λ(λ) is not! 'Complete prior ignorance' is not well defined.
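A rough sketch of the Bayesian calculation for the exponential example (the flat prior in τ on a finite range and the grid evaluation are assumptions made for illustration):

```python
import numpy as np

rng = np.random.default_rng(seed=2)
t = rng.exponential(scale=1.0, size=50)

taus = np.linspace(0.3, 3.0, 2000)                       # grid over the parameter
log_post = -len(t) * np.log(taus) - t.sum() / taus       # ln L + constant (flat prior)
post = np.exp(log_post - log_post.max())
dtau = taus[1] - taus[0]
post /= post.sum() * dtau                                # normalize the posterior pdf

tau_mode = taus[np.argmax(post)]        # posterior mode = ML estimate (flat prior in tau)
tau_mean = np.sum(taus * post) * dtau   # posterior expectation value
print(f"posterior mode = {tau_mode:.3f}, posterior mean = {tau_mean:.3f}")
```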

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 33 The method of least squares Suppose we measure N values, y₁, ..., y_N, assumed to be independent Gaussian r.v.s with yᵢ ~ Gaussian(λ(xᵢ; θ), σᵢ²). Assume known values of the control variable x₁, ..., x_N and known variances σᵢ². The likelihood function is L(θ) = ∏ᵢ (2πσᵢ²)^(−1/2) exp[ −(yᵢ − λ(xᵢ; θ))² / (2σᵢ²) ]. We want to estimate θ, i.e., fit the curve λ(x; θ) to the data points.

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 34 The method of least squares (2) The log-likelihood function is therefore ln L(θ) = −½ Σᵢ (yᵢ − λ(xᵢ; θ))² / σᵢ² + terms not depending on θ. So maximizing the likelihood is equivalent to minimizing χ²(θ) = Σᵢ (yᵢ − λ(xᵢ; θ))² / σᵢ². The minimum defines the least squares (LS) estimator θ̂. Very often the measurement errors are ~Gaussian, and so ML and LS are essentially the same. Often the χ² is minimized numerically (e.g. with the program MINUIT).
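A minimal sketch of a χ² fit (the data points, errors, and the choice of a straight-line model are made up for illustration; since the model is linear in the parameters, the minimization can be done analytically with the weighted normal equations instead of MINUIT):

```python
import numpy as np

# Hypothetical measurements y_i at control values x_i with known errors sigma_i
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.7, 2.3, 3.4, 3.9, 5.1])
sigma = np.array([0.2, 0.2, 0.3, 0.3, 0.2])

# Straight-line fit lambda(x; theta) = theta0 + theta1 * x
A = np.column_stack([np.ones_like(x), x])      # design matrix
W = np.diag(1.0 / sigma**2)                    # weights = inverse variances
cov = np.linalg.inv(A.T @ W @ A)               # covariance matrix of the LS estimators
theta_hat = cov @ A.T @ W @ y

chi2_min = np.sum(((y - A @ theta_hat) / sigma) ** 2)
print(f"intercept = {theta_hat[0]:.3f} +/- {np.sqrt(cov[0, 0]):.3f}")
print(f"slope     = {theta_hat[1]:.3f} +/- {np.sqrt(cov[1, 1]):.3f}")
print(f"chi2_min  = {chi2_min:.2f} for {len(x) - 2} degrees of freedom")
```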

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 35 LS with correlated measurements If the yᵢ follow a multivariate Gaussian with covariance matrix V, then maximizing the likelihood is equivalent to minimizing χ²(θ) = (y − λ(θ))ᵀ V⁻¹ (y − λ(θ)).
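A sketch of the correlated case for the simplest model, averaging two correlated measurements of the same quantity (the numbers are hypothetical); this anticipates the "combining correlated measurements" extra slides at the end.

```python
import numpy as np

# Two correlated measurements y of the same quantity (hypothetical numbers)
y = np.array([10.2, 11.0])
V = np.array([[0.25, 0.15],
              [0.15, 0.36]])                  # covariance matrix with positive correlation
V_inv = np.linalg.inv(V)
ones = np.ones(2)

# lambda(theta) = theta for both measurements; minimizing chi^2 gives a weighted average
theta_hat = (ones @ V_inv @ y) / (ones @ V_inv @ ones)
var_theta = 1.0 / (ones @ V_inv @ ones)
chi2_min = (y - theta_hat) @ V_inv @ (y - theta_hat)

print(f"LS average = {theta_hat:.3f} +/- {np.sqrt(var_theta):.3f}, chi2_min = {chi2_min:.3f}")
```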

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 36 Example of least squares fit Fit a polynomial of order p: λ(x; θ₀, ..., θ_p) = θ₀ + θ₁x + ... + θ_p x^p.

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 37 Variance of LS estimators In most cases of interest we obtain the variance in a manner similar to ML. E.g. for data ~ Gaussian we have χ² = −2 ln L + const, and so V̂[θ̂] = 2 ( ∂²χ²/∂θ² |_θ̂ )⁻¹; or, for the graphical method, we take the values of θ where χ²(θ) = χ²_min + 1.0.

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 38 Two-parameter LS fit

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 39 Goodness-of-fit with least squares The value of the χ² at its minimum is a measure of the level of agreement between the data and the fitted curve. It can therefore be employed as a goodness-of-fit statistic to test the hypothesized functional form λ(x; θ). We can show that if the hypothesis is correct, then the statistic t = χ²_min follows the chi-square pdf f(t; n_d), where the number of degrees of freedom is n_d = number of data points − number of fitted parameters.

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 40 Goodness-of-fit with least squares (2) The chi-square pdf has an expectation value equal to the number of degrees of freedom, so if χ²_min ≈ n_d the fit is 'good'. More generally, find the p-value: p = ∫ from χ²_min to ∞ of f(t; n_d) dt. This is the probability of obtaining a χ²_min as high as the one we got, or higher, if the hypothesis is correct. E.g. for the previous example, the p-values for the 1st order polynomial (line) and for the 0th order polynomial (horizontal line) are compared on the slide.
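A sketch of the p-value calculation (SciPy assumed; the χ²_min and n_d values below are placeholders, not the slide's numbers):

```python
from scipy.stats import chi2

chi2_min = 3.99        # hypothetical minimized chi-square
n_d = 3                # degrees of freedom = data points - fitted parameters

p_value = chi2.sf(chi2_min, df=n_d)   # P(chi2 >= chi2_min | hypothesis true)
print(f"p-value = {p_value:.3f}")
```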

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 41 Wrapping up lecture 3 There is no golden rule for parameter estimation; construct estimators so as to have desirable properties (small variance, small or zero bias, ...). The most important methods: maximum likelihood, least squares. There are several methods to obtain variances (stat. errors) from a fit: analytically; by Monte Carlo; from the information inequality / graphical method. Finding the estimator often involves numerical minimization.

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 42 Extra slides for lecture 3: Goodness-of-fit vs. statistical errors; Fitting histograms with LS; Combining measurements with LS.

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 43 Goodness-of-fit vs. statistical errors

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 44 Goodness-of-fit vs. stat. errors (2)

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 45 LS with binned data

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 46 LS with binned data (2)

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 47 LS with binned data — normalization

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 48 LS normalization example

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 49 Using LS to combine measurements

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 50 Combining correlated measurements with LS

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 51 Example: averaging two correlated measurements

G. Cowan Lectures on Statistical Data Analysis Lecture 3 page 52 Negative weights in LS average