R Kass/SP07 P416 Lecture 4
Propagation of Errors (Chapter 3, Taylor)

Slide 1: Introduction

Example: Suppose we measure the current (I) and resistance (R) of a resistor. Ohm's law relates V and I: V = IR. If we know the uncertainties (e.g. standard deviations) in I and R, what is the uncertainty in V?

More formally, given a functional relationship between several measured variables (x, y, z),
    Q = f(x, y, z)
what is the uncertainty in Q if the uncertainties in x, y, and z are known? To answer this question we use a technique called Propagation of Errors.

Usually when we talk about uncertainties in a measured variable such as x, we write x ± σ. In most cases we assume that the uncertainty is "Gaussian" in the sense that 68% of the time we expect the "true" (but unknown) value of x to lie in the interval [x − σ, x + σ]. BUT not all measurements can be represented by Gaussian distributions (more on that later)!

Propagation of Error Formula

To calculate the variance in Q as a function of the variances in x and y we use:
    σ_Q² = (∂Q/∂x)² σ_x² + (∂Q/∂y)² σ_y² + 2 (∂Q/∂x)(∂Q/∂y) σ_xy
with the derivatives evaluated at the average values. If the variables x and y are uncorrelated then σ_xy = 0 and the last term in the above equation is zero.

We can derive the above formula as follows. Assume we have several measurements of the quantities x (x₁, x₂, ..., x_N) and y (y₁, y₂, ..., y_N). The averages of x and y are:
    μ_x = (1/N) Σᵢ xᵢ   and   μ_y = (1/N) Σᵢ yᵢ
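As a minimal numerical sketch of the uncorrelated formula applied to the Ohm's-law example (the helper function and the values chosen for I and R are illustrative, not from the slide):

```python
import math

def propagate_uncorrelated(partials, sigmas):
    """sigma_Q for Q(x, y, ...) with uncorrelated inputs:
    sigma_Q^2 = sum over i of (dQ/dx_i)^2 * sigma_i^2."""
    return math.sqrt(sum((p * s) ** 2 for p, s in zip(partials, sigmas)))

# V = I*R, so dV/dI = R and dV/dR = I (derivatives evaluated at the measured values)
I, sig_I = 1.0, 0.1    # amps (assumed example values)
R, sig_R = 10.0, 1.0   # ohms (assumed example values)
sigma_V = propagate_uncorrelated([R, I], [sig_I, sig_R])
print(sigma_V)  # sqrt((10*0.1)^2 + (1*1.0)^2) = sqrt(2)
```

Here both terms contribute equally, so σ_V = √2 ≈ 1.4 V on a 10 V measurement.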

Slide 2

Expand Qᵢ = Q(xᵢ, yᵢ) about the average values, with the derivatives evaluated at (μ_x, μ_y):
    Qᵢ = Q(μ_x, μ_y) + (∂Q/∂x)(xᵢ − μ_x) + (∂Q/∂y)(yᵢ − μ_y) + higher-order terms
Assume the measured values are close to the average values and neglect the higher-order terms. Then, defining μ_Q = Q(μ_x, μ_y), the variance of Q is:
    σ_Q² = (1/N) Σᵢ (Qᵢ − μ_Q)²
         = (∂Q/∂x)² (1/N) Σᵢ (xᵢ − μ_x)² + (∂Q/∂y)² (1/N) Σᵢ (yᵢ − μ_y)² + 2 (∂Q/∂x)(∂Q/∂y) (1/N) Σᵢ (xᵢ − μ_x)(yᵢ − μ_y)
Since the derivatives are evaluated at the average values (μ_x, μ_y) we can pull them out of the summations. If the measurements are uncorrelated, the cross summation in the above equation averages to zero, leaving the uncorrelated-errors result:
    σ_Q² = (∂Q/∂x)² σ_x² + (∂Q/∂y)² σ_y²
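The linearization above can be sanity-checked numerically. This sketch (with made-up numbers) uses Q = x·y, whose partial derivatives are ∂Q/∂x = y and ∂Q/∂y = x, and compares a Monte Carlo estimate of σ_Q against the propagation formula:

```python
import random
import statistics

random.seed(3)
mx, sx = 5.0, 0.2   # assumed mean and sigma of x
my, sy = 3.0, 0.1   # assumed mean and sigma of y

# Monte Carlo: draw uncorrelated Gaussian x and y, form Q = x*y
qs = [random.gauss(mx, sx) * random.gauss(my, sy) for _ in range(100_000)]
mc = statistics.stdev(qs)

# propagation of errors, derivatives evaluated at the averages:
# sigma_Q^2 = (y*sx)^2 + (x*sy)^2
pred = ((my * sx) ** 2 + (mx * sy) ** 2) ** 0.5
print(mc, pred)  # the two estimates agree to within sampling error
```

The agreement is good here because σ_x/μ_x and σ_y/μ_y are small, so the neglected higher-order terms are tiny.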

Slide 3

If x and y are correlated, define σ_xy (the covariance, "correlated errors") as:
    σ_xy = (1/N) Σᵢ (xᵢ − μ_x)(yᵢ − μ_y)

Example: Power in an electric circuit, P = I²R. Let I = 1.0 ± 0.1 A and R = 10. ± 1.0 Ω, so P = 10 W. Calculate the variance in the power using propagation of errors, assuming I and R are uncorrelated:
    σ_P² = (∂P/∂I)² σ_I² + (∂P/∂R)² σ_R² = (2IR)² σ_I² + (I²)² σ_R² = (20 × 0.1)² + (1 × 1.0)² = 5 W²
so σ_P ≈ 2.2 W, and P = 10 ± 2 W. If the true value of the power were 10 W and we measured it many times with an uncertainty (σ) of ± 2 W, and Gaussian statistics apply, then 68% of the measurements would lie in the range [8, 12] W.

Sometimes it is convenient to put the above calculation in terms of relative errors:
    (σ_P/P)² = 4 (σ_I/I)² + (σ_R/R)² = 4 (0.1)² + (0.1)² = 0.05
In this example the uncertainty in the current dominates the uncertainty in the power! Thus the current must be measured more precisely if we want to reduce the uncertainty in the power.
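The power example can be reproduced directly; the absolute and relative forms below give the same σ_P = √5 ≈ 2.2 W (rounded to 2 W on the slide):

```python
import math

I, sig_I = 1.0, 0.1    # amps
R, sig_R = 10.0, 1.0   # ohms
P = I ** 2 * R         # 10 watts

# absolute form: sigma_P^2 = (2IR)^2 sig_I^2 + (I^2)^2 sig_R^2
sigma_P = math.sqrt((2 * I * R * sig_I) ** 2 + (I ** 2 * sig_R) ** 2)

# relative form: (sigma_P/P)^2 = 4 (sig_I/I)^2 + (sig_R/R)^2
rel = math.sqrt(4 * (sig_I / I) ** 2 + (sig_R / R) ** 2)

print(P, sigma_P, rel * P)  # both routes give sigma_P = sqrt(5)
```

Note how the current term, 4(0.1)² = 0.04, is four times the resistance term, 0.01: the factor of a² = 4 from the exponent in I² is what makes the current measurement dominate.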

Slide 4

Example: The error in the average ("error in the mean"). The average of N measurements, each with the same uncertainty σ, is:
    μ = (1/N) Σᵢ xᵢ
Propagation of errors then gives the "error in the mean":
    σ_μ = σ / √N
We can determine the mean better by combining measurements, but the precision only increases as the square root of the number of measurements.
    Do not confuse σ_μ with σ!
    σ is related to the width of the pdf (e.g. Gaussian) that the measurements come from.
    σ does not get smaller as we combine measurements.

The problem with the Propagation of Errors technique: in calculating the variance using propagation of errors, we usually assume the error in a measured variable (e.g. x) is Gaussian. BUT even if x is described by a Gaussian distribution, f(x) may not be described by a Gaussian distribution!

It can be shown that if a function is of the form f(x, y, z) = xᵃ yᵇ zᶜ then the relative variance of f(x, y, z) is (a generalization of Eq. 3.18):
    (σ_f/f)² = a² (σ_x/x)² + b² (σ_y/y)² + c² (σ_z/z)²
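The σ/√N behavior is easy to see in a small simulation (a sketch; the chosen σ, N, and number of trials are arbitrary): repeat the "average N measurements" experiment many times and look at the spread of those averages.

```python
import random
import statistics

random.seed(42)
sigma, N, trials = 2.0, 25, 2000   # assumed example values

# each "experiment" averages N measurements drawn from the same Gaussian pdf
means = [statistics.fmean(random.gauss(10.0, sigma) for _ in range(N))
         for _ in range(trials)]

spread = statistics.stdev(means)   # empirical width of the distribution of means
predicted = sigma / N ** 0.5       # error in the mean: sigma / sqrt(N) = 0.4
print(spread, predicted)
```

Each individual measurement still scatters with σ = 2; it is only the average of 25 of them that is good to about 0.4.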

Slide 5

What does the standard deviation that we calculate from propagation of errors mean?

Example: The new distribution is Gaussian. Let y = Ax, with A a constant and x a Gaussian variable. Then:
    μ_y = A μ_x   and   σ_y = A σ_x
Let the probability distribution for x be Gaussian:
    p(x, μ_x, σ_x) = (1/(σ_x √(2π))) exp[−(x − μ_x)² / (2σ_x²)]
Substituting x = y/A shows that the new probability distribution for y, p(y, μ_y, σ_y), is also described by a Gaussian.

[Figure: dN/dy for y = 2x with x = 10 ± 2. Start with a Gaussian with μ = 10, σ = 2; get another Gaussian with μ = 20, σ = 4.]
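A quick simulation of the slide's y = 2x example (sample size and seed are arbitrary choices) confirms that the transformed sample has the predicted mean and width:

```python
import random
import statistics

random.seed(1)
xs = [random.gauss(10.0, 2.0) for _ in range(100_000)]  # x = 10 +- 2
ys = [2.0 * x for x in xs]                              # linear transform y = Ax, A = 2

mu_y, sd_y = statistics.fmean(ys), statistics.stdev(ys)
print(mu_y, sd_y)  # close to mu_y = 20, sigma_y = 4
```

A histogram of `ys` would reproduce the slide's figure: the same Gaussian shape, just shifted and stretched by the factor A.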

Slide 6

Example: When the new distribution is non-Gaussian: y = 2/x. Propagation of errors gives
    σ_y = (2/μ_x²) σ_x
but the transformed probability distribution function for y does not have the form of a Gaussian.

[Figure: dN/dy for y = 2/x with x = 10 ± 2. Start with a Gaussian with μ = 10, σ = 2; you do NOT get another Gaussian. You get a pdf with μ = 0.2, σ = 0.04. This new pdf has longer tails than a Gaussian pdf: Prob(y > μ_y + 5σ_y) = 5×10⁻³, versus 3×10⁻⁷ for a Gaussian.]

Unphysical situations can arise if we use the propagation of errors results blindly!

Example: Suppose we measure the volume of a cylinder: V = πR²L. Let R = 1 cm exactly, and L = 1.0 ± 0.5 cm. Using propagation of errors:
    σ_V = πR² σ_L = π/2 cm³,   so V = π ± π/2 cm³
If the error on V (σ_V) is to be interpreted in the Gaussian sense, then there is a finite probability (≈ 2.3%) that the volume V is < 0, since V is only 2σ away from 0! Clearly this is unphysical. Care must be taken in interpreting the meaning of σ_V.
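The long non-Gaussian tail of y = 2/x can be seen directly by simulation (a sketch; sample size and seed are arbitrary, and the x ≤ 0 guard is an assumption added to keep 2/x finite for the vanishingly rare negative draws):

```python
import random
import statistics

random.seed(7)
n = 50_000
# keep x > 0: the Gaussian tail at x <= 0 is negligible here but would blow up 2/x
xs = [x for x in (random.gauss(10.0, 2.0) for _ in range(n)) if x > 0]
ys = [2.0 / x for x in xs]   # nonlinear transform: the result is NOT Gaussian

mu_y, sd_y = statistics.fmean(ys), statistics.stdev(ys)
# fraction beyond 5 sigma of the propagation-of-errors prediction mu = 0.2, sigma = 0.04
tail = sum(y > 0.2 + 5 * 0.04 for y in ys) / len(ys)
print(mu_y, sd_y, tail)
```

The sample mean and width land near the predicted 0.2 and 0.04, but the 5σ tail fraction comes out near the slide's 5×10⁻³, many orders of magnitude above the Gaussian value of 3×10⁻⁷.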

