1
Standard error of estimate & Confidence interval
2
Two results from probability theory
- Central limit theorem: the sum (or average) of independent random variables tends toward a normal distribution as the number of variables increases.
- Law of large numbers: as the sample size grows, the relative frequency in the sample approaches that of the population, so the sample average gets closer to the population mean.
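A minimal simulation sketch of both results, assuming NumPy is available; the distribution, sample sizes, and replication counts are illustrative choices, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

# Law of large numbers: the running average of i.i.d. draws
# approaches the population mean as the sample size grows.
population_mean = 0.5                        # mean of a Uniform(0, 1) variable
draws = rng.uniform(0, 1, size=100_000)
running_avg = np.cumsum(draws) / np.arange(1, draws.size + 1)
print("average after 100 draws:    ", running_avg[99])
print("average after 100000 draws: ", running_avg[-1], "(true mean:", population_mean, ")")

# Central limit theorem: sums of many independent uniform variables
# are approximately normally distributed (e.g. nearly zero skewness).
sums = rng.uniform(0, 1, size=(10_000, 30)).sum(axis=1)
skewness = float(((sums - sums.mean()) ** 3).mean() / sums.std() ** 3)
print("skewness of the sums (close to 0 for a normal shape):", skewness)
```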
3
Calculating expected values and variances
x, y: random variables; k: constant
E(x) = expected value of x; V(x) = variance of x
- E(x + y) = E(x) + E(y)
- V(x + y) = V(x) + V(y) (if x and y are independent)
- E(k·x) = k·E(x)
- V(k·x) = k²·V(x)
- V(x/k) = V(x)/k²
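A quick numerical check of these rules, assuming NumPy; the variables x and y and the constant k are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
k = 3.0
x = rng.normal(loc=2.0, scale=1.5, size=n)    # some random variable x
y = rng.exponential(scale=2.0, size=n)        # independent random variable y

print(np.mean(x + y), np.mean(x) + np.mean(y))   # E(x+y) = E(x) + E(y)
print(np.var(x + y), np.var(x) + np.var(y))      # V(x+y) = V(x) + V(y) (independence)
print(np.mean(k * x), k * np.mean(x))            # E(k*x) = k*E(x)
print(np.var(k * x), k**2 * np.var(x))           # V(k*x) = k^2 * V(x)
print(np.var(x / k), np.var(x) / k**2)           # V(x/k) = V(x) / k^2
```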
4
Standard error of an estimator
- Before observing the data: "the standard deviation of the estimates in repeated sampling IF the true value of the parameter were known"
- After observing the estimate: "the standard deviation of the estimates in repeated sampling IF the true value of the parameter were the observed one"
- Not a statement of uncertainty about the parameter, but a statement of uncertainty about the hypothetical values the estimator could take
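A sketch of the "repeated sampling" idea behind the standard error of the sample mean, assuming NumPy; the population parameters and sample size are hypothetical values chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n = 10.0, 4.0, 25      # hypothetical population mean, std. dev., sample size

# Draw many samples from the (hypothetically known) population and
# record the estimate (the sample mean) each time.
estimates = rng.normal(mu, sigma, size=(20_000, n)).mean(axis=1)

print("std. dev. of the estimates in repeated sampling:", estimates.std())
print("theoretical standard error sigma/sqrt(n):       ", sigma / np.sqrt(n))
```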
5
Confidence interval
- 95% CI: intervals calculated this way include the true value of the parameter in 95% of the cases under infinitely repeated sampling.
- The interval is random; it depends on the randomly sampled data.
- Wrong interpretation: "The true value of the parameter lies in this interval with probability 0.95."
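A coverage simulation matching this interpretation, assuming NumPy and a known population standard deviation; the population parameters and number of replications are illustrative. Roughly 95% of the random intervals should contain the fixed true mean.

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma, n, reps = 10.0, 4.0, 25, 10_000
z = 1.96                                    # 97.5% quantile of the standard normal

covered = 0
for _ in range(reps):
    sample = rng.normal(mu, sigma, size=n)
    half_width = z * sigma / np.sqrt(n)
    lo, hi = sample.mean() - half_width, sample.mean() + half_width
    covered += (lo <= mu <= hi)             # the interval is random, mu is fixed

print("coverage:", covered / reps)          # close to 0.95
```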
6
95% confidence interval for the mean
- An interval that contains the true mean in 95% of the cases in infinitely repeated sampling.
- Sample averages are approximately normally distributed.
- Assuming the standard deviation σ of the population is known: x̄ ± 1.96 · σ/√n
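A minimal example of computing this interval for a single sample, assuming NumPy and a known population standard deviation; the data values and σ are invented for illustration.

```python
import numpy as np

sigma = 4.0                                   # assumed known population std. dev.
sample = np.array([11.2, 9.5, 10.8, 12.1, 8.9, 10.3, 9.7, 11.5])
n = sample.size

half_width = 1.96 * sigma / np.sqrt(n)        # 1.96 = 97.5% normal quantile
print("95% CI for the mean:",
      (sample.mean() - half_width, sample.mean() + half_width))
```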