
Introduction to Error Analysis


1 Introduction to Error Analysis (Atomic Lab)

2 Significant Figures
For any quantity x, the best measurement of x is reported as x_best ± δx. In an introductory lab, the uncertainty δx is rounded to 1 significant figure; for example, a computed uncertainty is rounded to δx = 0.02, giving g = 9.82 ± 0.02. Right and wrong: it is wrong to quote the speed of sound with more digits than its ±10 m/s uncertainty justifies; right: speed of sound = 330 ± 10 m/s. Keep extra significant figures throughout the calculation and round only at the end, otherwise rounding errors are introduced.
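A minimal Python sketch of this rounding rule; the helper name round_measurement and the numbers in the example are illustrative, not lab data:

import math

def round_measurement(value, uncertainty):
    # round the uncertainty to 1 significant figure, then round the value to the same decimal place
    ndigits = -math.floor(math.log10(abs(uncertainty)))
    return round(value, ndigits), round(uncertainty, ndigits)

# e.g. round_measurement(9.8237, 0.0214)   # -> (9.82, 0.02)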

3 Statistically the Same
Student A: 30 ± 2. Student B: 34 ± 5. Since the uncertainty ranges for A and B overlap, these numbers are statistically the same.

4 Precision: Mathematical Definition
Precision of the speed of sound = 10/330 ≈ 0.03, or 3%. So often we write: speed of sound = 330 m/s ± 3%.

5 Propagation of Uncertainties: Sums and Differences
Suppose that x, …, w are independent measurements with uncertainties δx, …, δw and you need to calculate q = x + … + z − (u + … + w). If the uncertainties are independent, i.e. δw is not some function of δx, etc., then the uncertainties add in quadrature:
δq = sqrt( (δx)² + … + (δz)² + (δu)² + … + (δw)² )
Note: δq ≤ δx + … + δz + δu + … + δw.
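A short Python sketch of the quadrature rule for sums and differences; the function name and the example uncertainties are illustrative:

import math

def sum_difference_uncertainty(*deltas):
    # delta_q = sqrt(dx^2 + ... + dz^2 + du^2 + ... + dw^2) for q = x + ... + z - (u + ... + w)
    return math.sqrt(sum(d * d for d in deltas))

# e.g. q = x + y - z with dx = 0.2, dy = 0.3, dz = 0.1:
# sum_difference_uncertainty(0.2, 0.3, 0.1)   # ~0.37, smaller than the simple sum 0.6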

6 Propagation of Uncertainties: Products and Quotients
Suppose that x, …, w are independent measurements with uncertainties δx, …, δw and you need to calculate q = (x × … × z) / (u × … × w). If the uncertainties are independent, i.e. δw is not some function of δx, etc., then the fractional (relative) uncertainties add in quadrature:
δq/|q| = sqrt( (δx/x)² + … + (δz/z)² + (δu/u)² + … + (δw/w)² )
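The same rule in Python for products and quotients, applied to the fractional uncertainties; the names and numbers are again illustrative:

import math

def product_quotient_uncertainty(q, measurements):
    # measurements holds (value, uncertainty) for every factor appearing in q
    relative = math.sqrt(sum((dv / v) ** 2 for v, dv in measurements))
    return abs(q) * relative

# e.g. q = x * y / z with x = 2.0 +/- 0.1, y = 3.0 +/- 0.2, z = 1.5 +/- 0.1:
# product_quotient_uncertainty(2.0 * 3.0 / 1.5, [(2.0, 0.1), (3.0, 0.2), (1.5, 0.1)])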

7 Functions of One Variable
Suppose θ = 20 ± 3 degrees and we want to find cos θ. In general, for q = f(x), δq = |df/dx| · δx.
Here δθ = 3° ≈ 0.05 rad, and |d(cos θ)/dθ| = |−sin θ| = sin θ, so δ(cos θ) = sin θ · δθ = sin(20°) × 0.05 ≈ 0.02, while cos 20° = 0.94.
So cos θ = 0.94 ± 0.02.
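The cos θ example above, worked in Python with the rule δq = |df/dx| · δx (variable names are mine):

import math

theta = math.radians(20.0)              # central value, 20 degrees
dtheta = math.radians(3.0)              # uncertainty, about 0.05 rad
q = math.cos(theta)                     # best value of cos(theta)
dq = abs(-math.sin(theta)) * dtheta     # |d(cos)/d(theta)| * dtheta
# q ~ 0.94, dq ~ 0.02, so cos(theta) = 0.94 +/- 0.02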

8 Power Law
Suppose q = xⁿ and x is measured as x ± δx. Then the fractional uncertainty is multiplied by |n|:
δq/|q| = |n| · δx/|x|

9 Types of Errors
Measure the period of revolution of a wheel. As we repeat the measurement, some readings will come out higher and some lower. These are called "random errors"; in this case they are caused by reaction time.

10 What if the clock is slow?
We would never know if our clock is slow; we would have to compare it to another clock. This is a "systematic error". In some cases there is not a clear difference between random and systematic errors. Consider parallax: moving your head around produces a random error; keeping your head in one place makes it systematic.

11 Mean (or average)
The best estimate of x from N repeated measurements x_1, …, x_N is the mean: x̄ = (x_1 + x_2 + … + x_N)/N.

12 Deviation
We need to calculate an average or "standard" deviation. Each measurement deviates from the mean by d_i = x_i − x̄; to eliminate the possibility of the deviations averaging to zero, we square d_i before summing:
σ_x = sqrt( Σ d_i² / (N − 1) )
When you divide by N − 1, it is called the sample standard deviation; if dividing by N, the population standard deviation.

13 Standard Deviation of the Mean
The uncertainty in the best value is given by the standard deviation of the mean (SDOM): σ_mean = σ_x / √N. If x_best is the mean, then δx_best = σ_mean.
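A small Python sketch tying the last three slides together (mean, sample standard deviation, SDOM); the function name and sample data are illustrative:

import math

def mean_std_sdom(values):
    # returns (mean, sample standard deviation, standard deviation of the mean)
    n = len(values)
    xbar = sum(values) / n
    sigma = math.sqrt(sum((x - xbar) ** 2 for x in values) / (n - 1))
    return xbar, sigma, sigma / math.sqrt(n)

# e.g. mean_std_sdom([2.3, 2.4, 2.5, 2.4])   # hypothetical period measurements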

14 Histograms
A histogram plots the number of times each value occurred (vertical axis) against the value itself (horizontal axis).

15 Distribution of a Craps Game
The distribution of outcomes forms a bell curve, or normal distribution.

16 Bell Curve
The curve is centered on the centroid, or mean. The region from x̄ − σ to x̄ + σ contains 68% of the population; from x̄ − 2σ to x̄ + 2σ, 95%. 2σ is usually defined as the Error.

17 Gaussian
The Gaussian distribution is G(x) = exp[ −(x − x_0)² / (2σ_x²) ] / (σ_x √(2π)), where x_0 is the mean and σ_x is the standard deviation. These are mathematically equivalent to the formulae shown earlier.

18 Error and Uncertainty
While definitions vary between scientists, most would agree to the following: the uncertainty of a measurement is the value of one standard deviation (1σ); the error of the measurement is the value of two standard deviations (2σ).

19 Full Width at Half Maximum
A special quantity is the full width at half maximum (FWHM). To measure it, take half of the maximum value of the distribution (usually at the centroid), find the point to the left of the centroid where the frequency falls to this half value, and measure the width across to the corresponding point on the right of the centroid. Mathematically, the FWHM is related to the standard deviation by FWHM = 2√(2 ln 2) · σ_x ≈ 2.355 · σ_x.
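A quick numerical check of the FWHM relation in Python, scanning a unit-σ Gaussian for its half-maximum point (purely illustrative):

import math

sigma, x0 = 1.0, 0.0
def gauss(x):
    return math.exp(-(x - x0) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

half = gauss(x0) / 2.0
x = x0
while gauss(x) > half:      # walk right from the centroid until the curve drops to half maximum
    x += 1e-4
fwhm = 2 * (x - x0)
# fwhm ~ 2.355, matching 2 * sqrt(2 * ln 2) * sigma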

20 Weighted Average
Suppose each measurement has its own uncertainty: x_1 ± σ_1, x_2 ± σ_2, …, x_N ± σ_N. What is the best value of x?

21 We need to construct statistical weights
We want measurements with small errors to have the largest influence and those with the largest errors to have very little influence. Let the weight be w_i = 1/σ_i². Then the weighted average is x_wav = Σ w_i x_i / Σ w_i. This formula can also be used to determine the centroid of a Gaussian, where the weights are the frequency values for each measurement.
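A Python sketch of the weighted average; the companion formula for its uncertainty, σ_wav = 1/sqrt(Σ w_i), is standard but not stated on the slide:

import math

def weighted_average(values, sigmas):
    # w_i = 1 / sigma_i^2 ; x_wav = sum(w_i * x_i) / sum(w_i)
    weights = [1.0 / s ** 2 for s in sigmas]
    xwav = sum(w * x for w, x in zip(weights, values)) / sum(weights)
    sigma_wav = 1.0 / math.sqrt(sum(weights))
    return xwav, sigma_wav

# e.g. weighted_average([30.0, 34.0], [2.0, 5.0])   # pulled toward the more precise value, 30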

22 Least Squares Fitting
What if you want to fit a straight line through your data? In other words, y_i = A·x_i + B. First, you need to calculate residuals: residual = data − fit, or residual_i = y_i − (A·x_i + B). As the fit approaches the data, the residuals should become very small (or zero).

23 Big Problem
Some residuals are > 0 and some are < 0. If there is no bias, then for matching pairs r_j = −r_k, so r_j + r_k = 0 and the residuals cancel in the sum. The way to correct this is to square r_j and r_k; then the sum of the squares is positive and greater than 0.

24 Chi-square, χ²
Define χ² = Σ [ y_i − (A·x_i + B) ]² / σ_i². We need to minimize this function with respect to A and B, so we take the partial derivatives of χ² with respect to these variables and set the resulting derivatives equal to 0.

25 Chi-square, χ²
Setting ∂χ²/∂A = 0 and ∂χ²/∂B = 0 gives two linear equations for A and B (written here for equal uncertainties σ_i):
A·Σx_i² + B·Σx_i = Σx_i y_i
A·Σx_i + B·N = Σy_i

26 Using Determinants
Solving these two equations with determinants, with Δ = N·Σx_i² − (Σx_i)²:
A = [ N·Σx_i y_i − Σx_i·Σy_i ] / Δ
B = [ Σx_i²·Σy_i − Σx_i·Σx_i y_i ] / Δ

27 A Pseudocode
Dim x(100), y(100)        ' the measured data points
N = 100
xsum = 0 : ysum = 0 : xysum = 0 : x2sum = 0
For i = 1 To 100
    xsum = xsum + x(i)              ' running sum of x
    ysum = ysum + y(i)              ' running sum of y
    xysum = xysum + x(i) * y(i)     ' running sum of x*y
    x2sum = x2sum + x(i) * x(i)     ' running sum of x^2
Next i
Delta = N * x2sum - (xsum * xsum)
A = (N * xysum - xsum * ysum) / Delta         ' slope
B = (x2sum * ysum - xsum * xysum) / Delta     ' intercept
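An equivalent Python sketch of the pseudocode above, assuming the data are already in lists xs and ys; it also returns the reduced χ² discussed on the next slide, assuming a common uncertainty sigma_y for every point:

def fit_line(xs, ys, sigma_y=1.0):
    # least-squares straight line y = A*x + B via the determinant formulas
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    delta = n * sxx - sx * sx
    a = (n * sxy - sx * sy) / delta
    b = (sxx * sy - sx * sxy) / delta
    chi2 = sum(((y - (a * x + b)) / sigma_y) ** 2 for x, y in zip(xs, ys))
    return a, b, chi2 / (n - 2)          # slope, intercept, reduced chi-square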

28 χ² Values
If calculated properly, χ² per degree of freedom starts at large values and approaches 1 as the fit improves. This is because the residual at a given point should approach the value of the uncertainty. Your best fit is the pair of values of A and B that gives the lowest χ². What if χ² is less than 1?! Then your solution is over-determined, i.e. the fit has more adjustable freedom than the data can constrain. In that case you change A and B until the χ² no longer varies appreciably.

29 Without Proof

30 Extending the Method
Obviously, the method can be expanded to larger polynomials; this becomes a matrix-inversion problem. Exponential functions can be linearized by taking the logarithm and then solved as a straight line, as in the sketch below.
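A minimal Python sketch of the exponential case, assuming a model y = C·exp(k·x) with all y_i > 0 (the model form and names are illustrative):

import math

def fit_exponential(xs, ys):
    # fit y = C * exp(k * x) by fitting a straight line to (x, ln y)
    logs = [math.log(y) for y in ys]
    n = len(xs)
    sx, sy = sum(xs), sum(logs)
    sxy = sum(x * ly for x, ly in zip(xs, logs))
    sxx = sum(x * x for x in xs)
    delta = n * sxx - sx * sx
    k = (n * sxy - sx * sy) / delta          # slope of the line is the exponent k
    lnc = (sxx * sy - sx * sxy) / delta      # intercept is ln(C)
    return math.exp(lnc), k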

31 Extending the Method
Power laws can be linearized the same way, by taking the logarithm of both x and y, and so can multivariate multiplicative functions.

32 Uglier Functions
For an uglier function q = f(x, y, z), use a gradient search method. The gradient ∇f is a vector that points in the direction of steepest ascent, i.e. ∇f defines a direction; so follow −∇f (steepest descent) until it hits a minimum, as in the sketch below.
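A bare-bones gradient-descent sketch in Python; the step size, iteration count, and quadratic test function are illustrative assumptions, not part of the slides:

def gradient_descent(f, params, step=0.01, iters=1000, eps=1e-6):
    # step against a numerically estimated gradient until (approximately) at a minimum
    p = list(params)
    for _ in range(iters):
        grad = [(f(p[:i] + [p[i] + eps] + p[i+1:]) - f(p)) / eps for i in range(len(p))]
        p = [pi - step * gi for pi, gi in zip(p, grad)]
    return p

# e.g. minimizing the bowl (a - 2)^2 + (b + 1)^2 from [0, 0] converges toward [2, -1]:
# gradient_descent(lambda ab: (ab[0] - 2) ** 2 + (ab[1] + 1) ** 2, [0.0, 0.0])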

33 Correlation Coefficient, r²
r² starts at 0 and approaches 1 as the fit gets better. r² shows how strongly x and y are correlated, i.e. whether y = f(x). If r² < 0.5, there is little or no correlation.
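A short Python version of r²; the helper name is mine:

def r_squared(xs, ys):
    # square of the Pearson correlation coefficient between x and y
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    syy = sum((y - ybar) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)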

