Presentation transcript: "Parameter Estimation"

1 Parameter Estimation

2 Statistics vs. Probability
Probability: the model is specified, and we reason toward the data it generates (“prediction”)
Statistics: the data are observed, and we reason back to an inferred model (“estimation”)
[Figure: steam engine pump]

3 Parameter: an unknown (fixed) quantity associated with a population (example: the true, unknown mean weight of a population)
Statistic: a summary of sample measurements from the population (example: the average weight of a sample)
Estimator: a statistic associated with a parameter (e.g., sample average → population mean); denoted by a ^ (“hat”)
Estimate: the specific value of an estimator for a given sample; estimates vary from sample to sample

4 Concepts: Bias and Precision. These describe the error associated with estimators and estimates.

5 Bias and Precision
Bias = systematic error relative to the true value
Variance (precision) = error around the expected value of the estimator

6 Attributes of Estimators
Expected value: E(Y) = Σ y·f(y) (discrete) or E(Y) = ∫ y·f(y) dy (continuous)
– Weighted average of a random variable over all its possible values
– Mean value of a die roll: E(Y) = 1·(1/6) + 2·(1/6) + ... + 6·(1/6) = 3.5
Bias: Bias(Ŷ) = E(Ŷ) − Y
– Systematic error of the estimator: the difference between the expected value of the estimator and the true value of the parameter
– Unbiased ⇒ E(Ŷ) = Y
Variance (precision): var(Ŷ) = E[(Ŷ − E(Ŷ))²]
– Error (variability) around the expected value of the estimator: the expected (squared) difference between any single estimate Ŷ and the expected value of the estimator, E(Ŷ). It depends on sample size and population variability.
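The bias and variance definitions above can be checked by simulation. Below is a minimal sketch, assuming a normal population and comparing the sample mean against a deliberately biased alternative (the population, sample size, and both estimators are illustrative, not from the slides):

```python
# Sketch: Monte Carlo check of bias and variance for two estimators of a
# population mean. Population, sample size, and estimators are assumptions.
import numpy as np

rng = np.random.default_rng(42)
true_mean = 70.0        # true parameter Y (e.g., mean weight)
n, reps = 25, 100_000   # sample size and number of simulated samples

samples = rng.normal(loc=true_mean, scale=10.0, size=(reps, n))

# Estimator 1: the sample mean (unbiased).
y_hat = samples.mean(axis=1)
# Estimator 2: a deliberately biased estimator that drops each sample's
# largest observation before averaging.
y_hat_biased = np.sort(samples, axis=1)[:, :-1].mean(axis=1)

for name, est in [("sample mean", y_hat), ("drop-max mean", y_hat_biased)]:
    bias = est.mean() - true_mean  # Bias = E(Y_hat) - Y
    var = est.var()                # Var  = E[(Y_hat - E(Y_hat))^2]
    print(f"{name}: bias ~ {bias:+.3f}, variance ~ {var:.3f}")
```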

7 Accuracy (balance of bias and precision)
Mean square error: MSE(Ŷ) = var(Ŷ) + Bias(Ŷ)² = E[(Ŷ − Y)²]
MSE is the expected (squared) distance between any point estimate Ŷ and the true parameter value Y; it combines both types of error, bias and variance.
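The MSE identity can be verified numerically as well; a sketch reusing the assumed setup above, where the biased estimator again drops each sample's largest value:

```python
# Sketch: check that MSE = Var + Bias^2 for a (biased) estimator.
import numpy as np

rng = np.random.default_rng(0)
true_mean, n, reps = 70.0, 25, 100_000
samples = rng.normal(true_mean, 10.0, size=(reps, n))
est = np.sort(samples, axis=1)[:, :-1].mean(axis=1)  # biased estimator

mse_direct = np.mean((est - true_mean) ** 2)             # E[(Y_hat - Y)^2]
decomposed = est.var() + (est.mean() - true_mean) ** 2   # Var + Bias^2
print(mse_direct, decomposed)  # identical up to floating-point error
```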

8 Bias and Precision. [Figure: target diagrams showing combinations of bias and precision; the low-bias, high-precision case is labeled “Accurate!”]

9 Deriving Estimators
We assume the data were collected under a good experimental design (i.e., a random sample)
Think of the data as observations of random variables
We have in mind a reasonable probability distribution that generated the data (the “model”)
Approaches for deriving estimators of parameters:
– Least squares
– Method of moments
– Maximum likelihood

10 Maximum Likelihood Estimation
The model (a probability distribution) is used as the likelihood function
The roles of parameter and variable are switched; we use the data to estimate the parameter: L(θ | x)
Given a specific model (statistical distribution), which parameter value is the most likely to have produced the data?
The data are fixed and known
The likelihood is evaluated over the space of parameter values θ
Choose the θ̂ that maximizes L(θ | x)

11 Probability: given a parameter, what is the probability of each value of x?
MLE: given a value of x, which p has the highest likelihood? Maximize f(7 | p).
Coin toss example: n = 10 tosses, x = 7 heads observed.

12 MLE: given a value of x, which p has the highest likelihood? Maximize f(7 | p). [Figure: likelihood of p given x = 7; a sketch of this evaluation follows below.]
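A sketch of that evaluation for the coin-toss example (the binomial model is implied by the slide; the grid resolution and SciPy usage are implementation choices):

```python
# Sketch: evaluate the binomial likelihood f(x=7 | n=10, p) on a grid of
# p values and take the maximizer as the MLE.
import numpy as np
from scipy.stats import binom

n, x = 10, 7
p_grid = np.linspace(0.01, 0.99, 981)   # step 0.001
likelihood = binom.pmf(x, n, p_grid)    # L(p | x): f(x | p) as a function of p

p_mle = p_grid[np.argmax(likelihood)]
print(p_mle)  # 0.7, i.e., x/n
```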

13 The MLE of θ is found by differentiating log L(θ | x) with respect to θ, setting the derivative equal to zero, and solving for θ (a worked derivation for the coin-toss example follows).
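For the coin-toss example the derivation has a closed form; this is the standard calculation, spelled out for completeness:

```latex
% Binomial log-likelihood for x successes in n trials
\log L(p \mid x) = \mathrm{const} + x \log p + (n - x)\log(1 - p)

% Differentiate, set to zero, and solve
\frac{d}{dp}\log L = \frac{x}{p} - \frac{n - x}{1 - p} = 0
\quad\Longrightarrow\quad
\hat{p} = \frac{x}{n} = \frac{7}{10} = 0.7
```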

14 Optimization
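When no closed form exists, the maximization is done numerically. As a hedged sketch of what this slide presumably illustrates, the coin-toss MLE can be recovered by minimizing the negative log-likelihood with a general-purpose optimizer:

```python
# Sketch: numerical MLE via minimization of the negative log-likelihood.
from scipy.optimize import minimize_scalar
from scipy.stats import binom

n, x = 10, 7

def neg_log_lik(p):
    # Negative binomial log-likelihood; minimizing it maximizes L(p | x).
    return -binom.logpmf(x, n, p)

result = minimize_scalar(neg_log_lik, bounds=(1e-6, 1 - 1e-6), method="bounded")
print(result.x)  # ~0.7, matching the analytic MLE x/n
```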

15 MLEs have some nice properties
Often match intuition
Asymptotically normal
Consistent (the estimator converges to the parameter as the sample size grows large)
For large sample sizes, smallest variance among unbiased estimators
The likelihood can also be used to derive variances and covariances (Fisher information)

16 Confidence Intervals for MLEs

17 Normal-based CIs
The MLE θ̂ lies within 1.96 standard deviations of θ with probability 0.95, giving the 95 percent confidence interval θ̂ ± 1.96·SE(θ̂).
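Applied to the coin-toss example, a minimal sketch (the standard error formula sqrt(p̂(1 − p̂)/n), which comes from the Fisher information, is a standard result not stated on the slide):

```python
# Sketch: normal-based (Wald) 95% CI for the coin-toss MLE.
import math

n, x = 10, 7
p_hat = x / n
se = math.sqrt(p_hat * (1 - p_hat) / n)   # large-sample SE of p_hat
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)
print(ci)  # roughly (0.42, 0.98): wide, because n is small
```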

18 Confidence Intervals Based on Profile Likelihoods
Normal-based CIs are not always valid (small samples)
Profile likelihoods provide an alternative
A confidence interval may be obtained by solving 2·[log L(θ̂ | x) − log L(θ₀ | x)] = 3.84 for θ₀, where 3.84 is the 0.95 quantile of the chi-square distribution with df = 1
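A sketch of solving that equation numerically for the coin-toss example (the root finder and the bracketing intervals are implementation choices, not from the slides):

```python
# Sketch: likelihood-based 95% CI for p. Find the two values of p where
# 2 * [log L(p_hat) - log L(p)] equals 3.84 (chi-square, df = 1).
from scipy.optimize import brentq
from scipy.stats import binom

n, x = 10, 7
p_hat = x / n
cutoff = 3.84  # 0.95 quantile of the chi-square distribution with 1 df

def g(p):
    return 2 * (binom.logpmf(x, n, p_hat) - binom.logpmf(x, n, p)) - cutoff

lower = brentq(g, 1e-6, p_hat - 1e-6)
upper = brentq(g, p_hat + 1e-6, 1 - 1e-6)
print(lower, upper)  # roughly (0.39, 0.92): asymmetric, unlike the Wald CI
```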

19 [Figure-only slide]

20 Bayesian Inference
Inference is based on Pr(θ | x), the posterior distribution; the variance of θ represents our uncertainty in the value of θ
Principle: by Bayes’ theorem, Pr(θ | x) ∝ Pr(x | θ)·Pr(θ)
Prior knowledge + insights from the data = basis of inference
Prior distribution Pr(θ): the distribution of the parameters before seeing the data
Likelihood Pr(x | θ): how likely is it to obtain data x with parameter value θ? This updates the prior with information from the data
Result: the posterior Pr(θ | x), in which the variance of θ shrinks
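A minimal sketch of this updating for the coin-toss data, assuming a conjugate Beta prior (the prior is an assumption; the slides do not specify one):

```python
# Sketch: Beta-Binomial conjugate update. With a Beta(a, b) prior on p and
# x successes in n trials, the posterior is Beta(a + x, b + n - x).
from scipy.stats import beta

a, b = 1, 1    # flat Beta(1, 1) prior over p (assumed)
n, x = 10, 7   # observed data: 7 heads in 10 tosses

posterior = beta(a + x, b + n - x)   # Beta(8, 4)
print(posterior.mean())              # ~0.667, pulled slightly toward the prior
print(posterior.interval(0.95))      # 95% credible interval for p
```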

21 END

22 Bayes’ Theorem: used for Bayesian model updating and for Bayesian estimation.

