
CHAPTER 4 ESTIMATES OF MEAN AND ERRORS

4.1 METHOD OF LEAST SQUARES

In Chapter 2 we defined the mean μ of the parent distribution and noted that the most probable estimate of the mean μ of a random set of observations is the average x̄ of the observations. The justification for that statement is based on the assumption that the measurements are distributed according to the Gaussian distribution. In general, we expect the distribution of measurements to be either Gaussian or Poisson, but because these distributions are indistinguishable for most physical situations, we can assume the Gaussian distribution is obeyed.

Method of Maximum Likelihood

Assume that, in an experiment, we have observed a set of N data points that are randomly selected from the infinite set of the parent population, distributed according to the parent distribution. If the parent distribution is Gaussian with mean μ and standard deviation σ, the probability dP_i for making any single observation x_i within an interval dx is given by

dP_i = P_i \, dx    (4.1)

with probability function P_i = P_G(x_i; μ, σ) [see Equation (2.23)]. For simplicity, we shall denote the probability P_i for making an observation x_i by

P_i(\mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left[ -\frac{1}{2} \left( \frac{x_i - \mu}{\sigma} \right)^2 \right]    (4.2)
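As a concrete numerical check, the probability function of Equation (4.2) can be evaluated directly. The following is a minimal Python sketch, not part of the text; the values of x_i, μ, and σ are hypothetical.

import math

def gauss_prob(x, mu, sigma):
    """Gaussian probability function P_i(mu, sigma) of Equation (4.2)."""
    return (math.exp(-0.5 * ((x - mu) / sigma) ** 2)
            / (sigma * math.sqrt(2.0 * math.pi)))

# Probability density of observing x_i = 10.4 from a parent distribution
# with mean mu = 10.0 and standard deviation sigma = 0.5:
print(gauss_prob(10.4, 10.0, 0.5))   # ~0.579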

Because, in general, we do not know the mean μ of the distribution for a physical experiment, we must estimate it from some experimentally derived parameter. Let us call the estimate μ'. What formula for deriving μ' from the data will yield the maximum likelihood that the parent distribution had a mean equal to μ? If we hypothesize a trial distribution with mean μ' and standard deviation σ' = σ, the probability of observing the value x_i is given by the probability function

P_i(\mu') = \frac{1}{\sigma\sqrt{2\pi}} \exp\left[ -\frac{1}{2} \left( \frac{x_i - \mu'}{\sigma} \right)^2 \right]    (4.3)

Considering the entire set of N observations, the probability for observing that particular set is given by the product of the individual probability functions,

P(\mu') = \prod_{i=1}^{N} P_i(\mu')    (4.4)

where the symbol ∏ denotes the product of the N probabilities P_i(μ'). The product of the constants multiplying the exponential in Equation (4.3) is the same as that constant raised to the Nth power, and the product of the exponentials is the same as the exponential of the sum of the arguments. Therefore, Equation (4.4) reduces to

P(\mu') = \left( \frac{1}{\sigma\sqrt{2\pi}} \right)^N \exp\left[ -\frac{1}{2} \sum \left( \frac{x_i - \mu'}{\sigma} \right)^2 \right]    (4.5)
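To illustrate the algebraic reduction from Equation (4.4) to Equation (4.5), here is a minimal Python sketch (the measurement values are hypothetical, not from the text) that evaluates the likelihood both ways and shows the two forms agree.

import math

def likelihood_product(xs, mu_prime, sigma):
    """Equation (4.4): product of the N individual probabilities P_i(mu')."""
    p = 1.0
    for x in xs:
        p *= (math.exp(-0.5 * ((x - mu_prime) / sigma) ** 2)
              / (sigma * math.sqrt(2.0 * math.pi)))
    return p

def likelihood_reduced(xs, mu_prime, sigma):
    """Equation (4.5): the constant to the Nth power times the
    exponential of the summed argument."""
    n = len(xs)
    s = sum(((x - mu_prime) / sigma) ** 2 for x in xs)
    return (sigma * math.sqrt(2.0 * math.pi)) ** (-n) * math.exp(-0.5 * s)

xs = [9.6, 10.1, 10.4, 9.9, 10.2]         # hypothetical measurements
print(likelihood_product(xs, 10.0, 0.5))  # both lines print the same value
print(likelihood_reduced(xs, 10.0, 0.5))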

According to the method of maximum likelihood, if we compare the probabilities P(μ') of obtaining our set of observations from various parent populations with different means μ' but with the same standard deviation σ' = σ, the probability is greatest that the data were derived from a population with μ' = μ; that is, the most likely population from which such a set of data might have come is assumed to be the correct one.

Calculation of the Mean

The method of maximum likelihood states that the most probable value for μ' is the one that gives the maximum value for the probability P(μ') of Equation (4.5). Because this probability is the product of a constant times an exponential with a negative argument, maximizing the probability P(μ') is equivalent to minimizing the sum in the argument of the exponential,

X \equiv \frac{1}{2} \sum \left( \frac{x_i - \mu'}{\sigma} \right)^2    (4.6)

To find the minimum value of the function X we set its derivative with respect to μ' to zero,

\frac{dX}{d\mu'} = 0    (4.7)

and obtain

\frac{dX}{d\mu'} = \frac{1}{2} \frac{d}{d\mu'} \sum \left( \frac{x_i - \mu'}{\sigma} \right)^2 = -\sum \left( \frac{x_i - \mu'}{\sigma^2} \right) = 0    (4.8)

which, because σ is a constant, gives

\mu' = \frac{1}{N} \sum x_i = \bar{x}    (4.9)

Thus, the maximum likelihood method for estimating the mean by maximizing the probability P(μ') of Equation (4.5) shows that the most probable value of the mean is just the average x̄ as defined in Equation (1.1).

Estimated Error in the Mean

What uncertainty σ_μ is associated with our determination of the mean μ' in Equation (4.9)? We have assumed that all data points x_i were drawn from the same parent distribution and were thus obtained with an uncertainty characterized by the same standard deviation σ. Each of these data points contributes to the determination of the mean μ', and therefore each data point contributes some uncertainty to the determination of the final result. A histogram of our data points would follow the Gaussian shape, peaking at the value μ' and exhibiting a width corresponding to the standard deviation σ. Clearly we are able to determine the mean to much better than ±σ, and our determination will improve as we increase the number of measured points N and are thus able to improve the agreement between our experimental histogram and the smooth Gaussian curve.

In Chapter 3 we developed the error propagation equation [see Equation (3.13)] for finding the contribution of the uncertainties in several terms contributing to a single result. Applying this relation to Equation (4.9) to find the variance σ_μ² of the mean μ', we obtain

\sigma_\mu^2 = \sum \left[ \sigma_i^2 \left( \frac{\partial \mu'}{\partial x_i} \right)^2 \right]    (4.10)

where the variance σ_i² in each measured data point x_i is weighted by the square of the effect ∂μ'/∂x_i that that data point has on the result. This approximation neglects correlations between the measurements x_i as well as second- and higher-order terms in the expansion of the variance σ_μ², but it should be a reasonable approximation as long as none of the data points contributes a major portion of the final result. If the uncertainties of the data points are all equal (σ_i = σ), the partial derivatives in Equation (4.10) are simply

\frac{\partial \mu'}{\partial x_i} = \frac{\partial}{\partial x_i} \left( \frac{1}{N} \sum x_i \right) = \frac{1}{N}    (4.11)
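Substituting Equation (4.11) into Equation (4.10) with σ_i = σ gives σ_μ² = N σ² (1/N)² = σ²/N, that is, σ_μ = σ/√N. Both results of this section can be checked numerically; below is a minimal, self-contained Python sketch (all data values, sample sizes, and the random seed are hypothetical, not from the text). It first scans trial means μ' to confirm that the likelihood of Equation (4.5) peaks at the average x̄, then simulates many repeated N-point experiments to confirm that the spread of the resulting means shrinks as σ/√N.

import math
import random

def log_likelihood(xs, mu_prime, sigma):
    """Logarithm of P(mu') from Equation (4.5); taking the log turns the
    product into a sum and avoids numerical underflow for large N."""
    n = len(xs)
    s = sum(((x - mu_prime) / sigma) ** 2 for x in xs)
    return -n * math.log(sigma * math.sqrt(2.0 * math.pi)) - 0.5 * s

xs = [9.6, 10.1, 10.4, 9.9, 10.2]   # hypothetical measurements
sigma = 0.5

# Scan trial values of mu' from 9.0 to 11.0; the maximum-likelihood
# value should coincide with the average x-bar, per Equation (4.9).
trial_mus = [m / 1000.0 for m in range(9000, 11001)]
best = max(trial_mus, key=lambda m: log_likelihood(xs, m, sigma))
print(best, sum(xs) / len(xs))      # both ~10.04

# Simulate repeated N-point experiments: the standard deviation of the
# resulting means should match sigma / sqrt(N), per Eqs. (4.10)-(4.11).
random.seed(1)
mu, trials = 10.0, 2000
for n in (4, 16, 64):
    means = [sum(random.gauss(mu, sigma) for _ in range(n)) / n
             for _ in range(trials)]
    avg = sum(means) / trials
    spread = math.sqrt(sum((m - avg) ** 2 for m in means) / trials)
    print(n, round(spread, 4), round(sigma / math.sqrt(n), 4))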