Presentation transcript:

1 Version 2012 Updated on 030212 Copyright © All rights reserved Dong-Sun Lee, Prof., Ph.D. Chemistry, Seoul Women’s University Chapter 6 Random Errors in Chemical Analyses

2 Definition of Statistics 1. A collection of data or numbers. 2. The logic that applies mathematics to the science of collecting, analyzing, and interpreting data for the purpose of making decisions.

3 Red blood cells (erythrocytes, Er) tangled in fibrin threads (Fi) in a blood clot. Stacks of erythrocytes in a clot are called a rouleaux formation (Ro). RBC count (normal): 5.0 × 10⁶ cells/µL. A glucose analyzer. Normal value: 70–120 mg/dL.

4 Three-dimensional plot showing absolute error in the Kjeldahl nitrogen determination for four different analysts. The results of analyst 1 are both precise and accurate.

5 Frequency distribution (probability) of data
In the results of a series of independent trials, random errors often occur in fairly regular and predictable patterns. These patterns can be expressed mathematically; the most important are the normal, binomial, and Poisson distributions. The normal curve was derived mathematically in 1733 by DeMoivre as an approximation to the binomial distribution; his paper was not rediscovered until 1924, by Karl Pearson. Laplace used the normal curve in 1783 to describe the distribution of errors, and Gauss used it to analyze astronomical data in 1809. The normal curve is therefore often called the Gaussian distribution; in everyday usage it is the bell-shaped curve.
http://www.stat.wvu.edu/SRS/Modules/Normal/normal.html
http://www.ms.uky.edu/~mai/java/stat/GaltonMachine.html
1) Gaussian normal distribution
General equation: y = exp[−(x − μ)² / 2σ²] / (σ√(2π))
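The general equation above can be evaluated directly. The sketch below is a minimal plain-Python version of the Gaussian density (function name `gaussian_pdf` is my own, not from the slides):

```python
import math

def gaussian_pdf(x, mu=0.0, sigma=1.0):
    """Value of the normal (Gaussian) density at x:
    exp(-(x - mu)^2 / (2 sigma^2)) / (sigma * sqrt(2 pi))."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# The curve peaks at x = mu; for the standard normal the peak height
# is 1 / sqrt(2 pi), roughly 0.399.
peak = gaussian_pdf(0.0)
```

The symmetry of the curve (characteristic 4> on slide 7) follows because x enters only as (x − μ)².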

6 Histogram of Gaussian normal distribution. The histogram above illustrates well the concept of the normal curve. Note the symmetry of the graph. Data values at the low and high extremes occur infrequently. As data values move toward the mean, the frequency increases. Note the vertical bar in the middle of the histogram. That interval represents the mode. Not so apparent is the fact that it also represents the median and mean.

7 Characteristics of the Gaussian normal curve
1> Bell-shaped curve: the normal curve is symmetric around the mean, with only one mode.
2> Maximum frequency at zero indeterminate error. A normal distribution with μ = 0 and σ² = 1 is called a standard normal distribution.
3> Few very large errors.
4> Symmetric, with equal numbers of positive and negative errors.
5> Exponential decrease in frequency as the magnitude of the error increases.
6> The mean, mode, and median are therefore all the same for the normal distribution.
7> Approximately 2/3 of the probability density lies within 1 standard deviation of the mean (μ ± σ).
8> Approximately 95% of the probability density lies within 2 standard deviations of the mean (μ ± 2σ).
http://www.bcu.ubc.ca/~whitlock/bio300/LectureNotes/Distributions/Distributions.html

8 Area under the Curve and z-score
The area under the normal curve indicates the proportion of observations obtaining a score equivalent to the z-score on the x-axis. For example, 34% of the sample scored between the mean (z = 0) and one standard deviation above the mean (z = 1). The characteristics of the normal curve make it useful to calculate z-scores, an index of the distance from the mean in units of standard deviations.
z-score = (score − mean) / standard deviation = (x − μ) / σ
Gaussian normal distribution curve: μ ± 1σ = 68.26%, μ ± 2σ = 95.44%, μ ± 3σ = 99.74%, μ ± 4σ ≈ 99.99%
http://research.med.umkc.edu/tlwbiostats/curve10.html
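The z-score formula and the ±1σ/±2σ/±3σ areas can be checked numerically with the error function. A minimal sketch (the glucose mean and standard deviation used at the end are made-up illustration values, not from the slides):

```python
import math

def z_score(x, mu, sigma):
    """Distance of x from the mean in units of standard deviations."""
    return (x - mu) / sigma

def area_within(z):
    """Fraction of a normal population lying within +-z standard
    deviations of the mean, via the error function: erf(z / sqrt(2))."""
    return math.erf(z / math.sqrt(2))

within_1 = area_within(1)  # ~0.6827, the "approximately 2/3" rule
within_2 = area_within(2)  # ~0.9545, the "approximately 95%" rule
within_3 = area_within(3)  # ~0.9973

# Illustrative z-score (hypothetical mu = 95, sigma = 12.5 mg/dL):
z = z_score(120, 95, 12.5)  # = 2.0
```

Note that the exact ±2σ area is 95.44–95.45%, which is why the table above reads 95.44% rather than 95.56%.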


10 Normal error curves. (a) The abscissa is the deviation from the mean in the units of measurement. (b) The abscissa is the deviation from the mean in units of σ.

11 Gaussian curves for two sets of light bulbs, one having a standard deviation half as great as the other. The number of bulbs described by each curve is the same. An experiment that produces a small standard deviation is more precise than one that produces a large standard deviation. Bar graph and Gaussian curve describing the lifetime of hypothetical set of electric light bulbs.

12 A Gaussian curve in which μ = 0 and σ = 1. A Gaussian curve whose area is unity is called a normal error curve. In this case, the abscissa, x, is equal to z, defined as z = (x − μ) / σ. The sample mean is an estimate of μ, the actual mean of the population. The mean gives the center of the distribution. The population standard deviation σ, which is a measure of the precision of a population of data, is given by
σ = √[ Σ(xᵢ − μ)² / N ]
where N is the number of data points making up the population.
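The population standard deviation formula translates directly into code. A minimal sketch (the replicate values are hypothetical, chosen only to exercise the formula):

```python
import math

def population_stats(data):
    """Population mean and standard deviation:
    sigma = sqrt( sum((x_i - mu)^2) / N ), dividing by N (not N - 1)
    because the whole population is assumed known."""
    n = len(data)
    mu = sum(data) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in data) / n)
    return mu, sigma

# Hypothetical replicate measurements (illustration only):
mu, sigma = population_stats([10.1, 10.3, 9.9, 10.0, 10.2])
```

For a finite sample, the estimate s of σ divides by N − 1 instead of N; the pooled expression on slide 14 follows the same sum-of-squared-deviations pattern.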

13 Area from 900 to 1000 = (area from −∞ to 1000) − (area from −∞ to 900) = 0.949841 − 0.719629 = 0.230212
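The two cumulative areas can be reproduced from the normal CDF. In the sketch below, the mean (845.2 h) and standard deviation (94.2 h) of the light-bulb lifetimes are assumptions chosen to be consistent with the areas quoted above; they do not appear on this slide:

```python
import math

def normal_cdf(x, mu, sigma):
    """Area under the Gaussian curve from -infinity to x,
    via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Assumed light-bulb lifetime statistics (consistent with the slide's areas):
mu, s = 845.2, 94.2
area = normal_cdf(1000, mu, s) - normal_cdf(900, mu, s)  # ~0.2302
```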

14 Variance
The variance is the square of the standard deviation: V = s².
Relative standard deviation, a measure of precision: R.S.D. = s / x̄
Coefficient of variation: CV(%) = (s / x̄) × 100%
Pooled standard deviation:
s_pooled = √{ [Σ(xᵢ − x̄₁)² + Σ(xⱼ − x̄₂)² + Σ(xₖ − x̄₃)² + …] / (n₁ + n₂ + n₃ + … − N_t) }
where N_t is the total number of data sets that are pooled.
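The pooled standard deviation and CV formulas can be sketched as follows (the replicate sets at the end are hypothetical illustration values):

```python
import math

def pooled_std(*data_sets):
    """Pooled standard deviation of several data sets assumed to share
    one underlying sigma: sqrt of (total sum of squared deviations about
    each set's own mean) / (total points - number of sets)."""
    ss = 0.0
    n_total = 0
    for data in data_sets:
        mean = sum(data) / len(data)
        ss += sum((x - mean) ** 2 for x in data)
        n_total += len(data)
    return math.sqrt(ss / (n_total - len(data_sets)))

def cv_percent(data):
    """Coefficient of variation: 100 * s / mean, with the sample
    standard deviation s (divisor N - 1)."""
    n = len(data)
    mean = sum(data) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    return 100 * s / mean

# Hypothetical replicate sets (illustration only):
sp = pooled_std([10.0, 10.2, 10.4], [9.8, 10.0])
```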


17 Propagation of uncertainty
1) Addition and subtraction
  1.76 (±0.03) → e₁
+ 1.89 (±0.02) → e₂
− 0.59 (±0.02) → e₃
= 3.06 (±e₄)
e₄ = √(e₁² + e₂² + e₃²) = √[(0.03)² + (0.02)² + (0.02)²] = 0.041
Percent relative error = 0.041 × 100 / 3.06 = 1.3%
Absolute error: 3.06 ± 0.04
Relative error: 3.06 ± 1%
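The addition/subtraction rule (absolute uncertainties add in quadrature) can be sketched as:

```python
import math

def add_sub_uncertainty(*errors):
    """Absolute uncertainty of a sum or difference:
    square root of the sum of the squared absolute uncertainties."""
    return math.sqrt(sum(e * e for e in errors))

# The slide's example: 1.76(+-0.03) + 1.89(+-0.02) - 0.59(+-0.02)
value = 1.76 + 1.89 - 0.59                  # = 3.06
e4 = add_sub_uncertainty(0.03, 0.02, 0.02)  # ~0.041
percent = 100 * e4 / value                  # ~1.3 %
```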

18 2) Multiplication and division: error of the product or quotient
%e₄ = √[(%e₁)² + (%e₂)² + (%e₃)²]
Ex. {1.76 (±0.03) × 1.89 (±0.02)} ÷ {0.59 (±0.02)} = 5.64 ± ?
1> Convert absolute errors to % relative errors:
{1.76 (±1.7%) × 1.89 (±1.1%)} ÷ {0.59 (±3.4%)} = 5.64 ± ?
2> %e₄ = √[(%e₁)² + (%e₂)² + (%e₃)²] = √[(1.7)² + (1.1)² + (3.4)²] = 4.0%
Result: 5.6 (±4%)
3> Convert % relative error back to absolute error: 4.0% × 5.64 = 0.23
Result: 5.6 (±0.2)
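The multiplication/division rule (percent uncertainties add in quadrature) can be sketched as:

```python
import math

def mul_div_percent_error(*percent_errors):
    """Percent uncertainty of a product or quotient:
    square root of the sum of the squared percent uncertainties."""
    return math.sqrt(sum(p * p for p in percent_errors))

# The slide's example: 1.76(+-0.03) * 1.89(+-0.02) / 0.59(+-0.02)
value = 1.76 * 1.89 / 0.59  # ~5.64
pcts = [100 * 0.03 / 1.76, 100 * 0.02 / 1.89, 100 * 0.02 / 0.59]
pe4 = mul_div_percent_error(*pcts)  # ~4.0 %
abs_err = value * pe4 / 100         # ~0.2
```

Carrying the unrounded percent errors gives 3.94% rather than the slide's 4.0%; the small difference comes from rounding 1.7, 1.1, and 3.4 before squaring.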

19 3) Mixed operations
Ex. {1.76 (±0.03) − 0.59 (±0.02)} ÷ {1.89 (±0.02)} = 0.6190 ± ?
1> Numerator first (addition/subtraction rule): 1.76 (±0.03) − 0.59 (±0.02) = 1.17 ± 0.036
2> Convert absolute errors to % relative errors (multiplication/division rule):
1.17 (±0.036) ÷ 1.89 (±0.02) = 1.17 (±3.1%) ÷ 1.89 (±1.1%) = 0.6190 (±3.3%)
Result: 0.62 (±3%)
3> 0.6190 × 3.3% = 0.020
Result: 0.62 (±0.02)
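The two-step mixed calculation can be sketched end to end:

```python
import math

# Step 1: subtraction in the numerator -- absolute errors add in quadrature.
num = 1.76 - 0.59                         # = 1.17
e_num = math.sqrt(0.03 ** 2 + 0.02 ** 2)  # ~0.036

# Step 2: division -- switch to percent errors and add those in quadrature.
value = num / 1.89                        # ~0.619
pct = math.sqrt((100 * e_num / num) ** 2 + (100 * 0.02 / 1.89) ** 2)  # ~3.3 %
abs_err = value * pct / 100               # ~0.020
```

The key point is the switch of error type between steps: absolute uncertainties for the sum/difference, percent uncertainties for the quotient.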

20 Significant figures
The number of significant figures is the number of digits needed to write a given value in scientific notation without loss of accuracy.
8.25 × 10⁴ — 3 significant figures
8.250 × 10⁴ — 4 significant figures
8.2500 × 10⁴ — 5 significant figures
0.801 — 3 significant figures
0.0801 — 3 significant figures
0.8010 — 4 significant figures
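The counting rules from the next slide can be sketched as a small helper for plain decimal numerals (scientific notation like 8.25 × 10⁴ is not handled; the function name `sig_figs` is my own):

```python
def sig_figs(number_str):
    """Count significant figures in a plain decimal numeral given as a string."""
    digits = number_str.replace("-", "").replace("+", "")
    has_point = "." in digits
    digits = digits.replace(".", "").lstrip("0")  # rule 1: discard initial zeros
    if not has_point:
        # rule 2: final zeros without a decimal point are not counted
        digits = digits.rstrip("0")
    return len(digits)  # rule 3: all remaining digits are significant

counts = [sig_figs(s) for s in ["0.801", "0.0801", "0.8010", "82500"]]
```

The string form matters: "82500" yields 3 because its trailing zeros are ambiguous, which is exactly why scientific notation is preferred.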

21 Rules for determining the number of significant figures:
1> Discard all initial zeros.
2> Disregard all final zeros unless they follow a decimal point.
3> All remaining digits, including zeros between nonzero digits, are significant.


23 The scale of Spectronic 20 spectrophotometer. Graphs demonstrating choice of rulings in relation to significant figures in the data.

24 Experimental potentiometric titration curve: 0.1 N NaOH vs. 0.0686 N PHP (25 mL).

25 Example of a graph intended to show the qualitative behavior of the function y = e –x/6 cos x. Calibration curve for a 50 mL buret.

26 Summary
Normal distribution curve
Gaussian curve
z-score
Variance
Coefficient of variation
Significant figures

27 Q & A. Thanks. Dong-Sun Lee / Analytical Chemistry Laboratory (CAT) / SWU.

