Chapter 7. Statistical Estimation and Sampling Distributions
7.1 Point Estimates
7.2 Properties of Point Estimates
7.3 Sampling Distributions
7.4 Constructing Parameter Estimates
7.5 Supplementary Problems
7.1 Point Estimates 7.1.1 Parameters
In statistical inference, the term parameter is used to denote a quantity, say θ, that is a property of an unknown probability distribution, for example the mean, the variance, or a particular quantile of the distribution. Parameters are unknown, and one of the goals of statistical inference is to estimate them.
Figure 7.1 The relationship between a point estimate and an unknown parameter θ
Figure 7.2 Estimation of the population mean by the sample mean
7.1.2 Statistics
In statistical inference, the term statistic is used to denote a quantity that is a property of a sample, for example the sample mean, the sample variance, or a particular sample quantile. Statistics are functions of a random sample: they are random variables whose observed values can be calculated from a set of observed data.
7.1.3 Estimation
Estimation is a procedure for “guessing” properties of the population from which data are collected. A point estimate of an unknown parameter θ is a statistic that represents a “guess” at the value of θ. Example 1 (Machine breakdowns): how can we estimate P(machine breakdown due to operator misuse)? Example 2 (Rolling mill scrap): how can we estimate the mean and variance of the probability distribution of the % scrap? A minimal sketch of both estimates is given below.
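As a concrete illustration of these two point estimates, here is a minimal Python sketch; the breakdown records and % scrap values are made-up numbers, not data from the text.

```python
# Point estimates for the two examples above, on made-up data.
breakdowns = ["misuse", "electrical", "misuse", "mechanical", "misuse"]

# Example 1: estimate P(breakdown due to operator misuse) by the sample proportion.
p_hat = sum(1 for cause in breakdowns if cause == "misuse") / len(breakdowns)

# Example 2: estimate the mean and variance of the % scrap distribution
# by the sample mean and the sample variance.
scrap_pct = [2.1, 1.8, 2.4, 2.0, 1.7, 2.3]
n = len(scrap_pct)
mu_hat = sum(scrap_pct) / n
var_hat = sum((x - mu_hat) ** 2 for x in scrap_pct) / (n - 1)

print(p_hat, mu_hat, var_hat)
```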
7.2 Properties of Point Estimates 7.2.1. Unbiased Estimates (1/5)
Definitions: a point estimate θ̂ of a parameter θ is said to be unbiased if E(θ̂) = θ. If a point estimate is not unbiased, then its bias is defined to be bias(θ̂) = E(θ̂) − θ.
7.2.1. Unbiased Estimates (2/5)
Point estimate of a success probability: if X ~ B(n, p) counts the successes in n trials, the point estimate p̂ = X/n is unbiased, since E(p̂) = E(X)/n = np/n = p.
7.2.1. Unbiased Estimates (3/5)
Point estimate of a population mean: for a sample X₁, …, Xₙ from a distribution with mean μ, the sample mean X̄ = (X₁ + ⋯ + Xₙ)/n is an unbiased point estimate, since E(X̄) = (E(X₁) + ⋯ + E(Xₙ))/n = μ.
7.2.1. Unbiased Estimates (4/5)
Point estimate of a population variance: the sample variance S² = Σᵢ (Xᵢ − X̄)² / (n − 1) is an unbiased point estimate of the population variance σ², that is, E(S²) = σ². Dividing by n instead of n − 1 produces a biased estimate that systematically underestimates σ², as the simulation below illustrates.
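The following simulation is a sketch, not material from the slides: averaged over many replications, the n − 1 version centers on the true variance while the n version undershoots it.

```python
import random

random.seed(0)
true_var = 4.0            # samples come from N(0, 2^2)
n, reps = 5, 100_000

sum_unbiased = sum_biased = 0.0
for _ in range(reps):
    xs = [random.gauss(0.0, 2.0) for _ in range(n)]
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)
    sum_unbiased += ss / (n - 1)   # sample variance S^2
    sum_biased += ss / n           # biased alternative

print(sum_unbiased / reps)  # close to 4.0
print(sum_biased / reps)    # close to 4.0 * (n - 1)/n = 3.2
```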
7.2.1. Unbiased Estimates (5/5)
7.2.2. Minimum Variance Estimates (1/4)
Which is the better of two unbiased point estimates? The one with the smaller variance, since its distribution is more tightly concentrated around the true value of the parameter.
7.2.2. Minimum Variance Estimates (2/4)
7.2.2. Minimum Variance Estimates (3/4)
An unbiased point estimate whose variance is smaller than that of any other unbiased point estimate is called the minimum variance unbiased estimate (MVUE). Two related notions are the relative efficiency of one unbiased point estimate to another, defined as the ratio of their variances, and the mean squared error (MSE). How is the MSE decomposed? Why is it useful? These questions are taken up at the end of this section.
7.2.2. Minimum Variance Estimates (4/4)
Example: two independent measurements
Suppose X_A and X_B are two independent measurements of an unknown constant C, with variances σ_A² and σ_B². Both are unbiased point estimates of C, since E(X_A) = E(X_B) = C, and the relative efficiency of one to the other is the ratio of their variances. Now consider a new estimate θ̂ = p X_A + (1 − p) X_B for some 0 ≤ p ≤ 1. This estimate is also unbiased, since E(θ̂) = p E(X_A) + (1 − p) E(X_B) = p C + (1 − p) C = C. What is the optimal value of p, the one that gives θ̂ the smallest possible mean squared error (MSE)?
Since X_A and X_B are independent, the variance of θ̂ is given by Var(θ̂) = p² σ_A² + (1 − p)² σ_B². Differentiating with respect to p and setting the derivative to zero yields 2p σ_A² − 2(1 − p) σ_B² = 0, so the value of p that minimizes the variance is p* = σ_B² / (σ_A² + σ_B²). Therefore, in this example, the variance of the optimal combined estimate is Var(θ̂) = σ_A² σ_B² / (σ_A² + σ_B²), which is also its MSE since the estimate is unbiased.
The relative efficiency of the combined estimate to either single measurement reflects the gain from combining: its variance σ_A² σ_B² / (σ_A² + σ_B²) is smaller than both σ_A² and σ_B², so the combined estimate is more efficient than either measurement used alone.
In general, given n independent unbiased estimates θ̂₁, …, θ̂ₙ of a parameter θ, having variances σ₁², …, σₙ² respectively, any weighted average θ̂ = w₁θ̂₁ + ⋯ + wₙθ̂ₙ with w₁ + ⋯ + wₙ = 1 is again an unbiased estimator. The variance of this estimator is Var(θ̂) = w₁²σ₁² + ⋯ + wₙ²σₙ², which is minimized by the inverse-variance weights wᵢ ∝ 1/σᵢ², giving Var(θ̂) = 1 / (1/σ₁² + ⋯ + 1/σₙ²), in agreement with the two-measurement case above.
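A small sketch of this combination rule, with illustrative estimates and variances chosen here rather than taken from the text:

```python
def combine(estimates, variances):
    """Inverse-variance weighted average of independent unbiased
    estimates, and the variance of the combined estimate."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    theta_hat = sum(w * e for w, e in zip(weights, estimates)) / total
    return theta_hat, 1.0 / total

# Two measurements of the same constant C with variances 2.0 and 3.0:
# the optimal weight on the first is 3/(2+3) = 0.6, matching p* above.
est, var = combine([10.2, 9.6], [2.0, 3.0])
print(est, var)   # combined variance 1.2 < min(2.0, 3.0)
```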
Mean squared error (MSE): for a point estimate θ̂ of a parameter θ, the mean squared error is defined by MSE(θ̂) = E[(θ̂ − θ)²]. Moreover, writing θ̂ − θ = (θ̂ − E(θ̂)) + (E(θ̂) − θ) and noting that the cross term has expectation zero, notice that MSE(θ̂) = Var(θ̂) + bias(θ̂)². This decomposition is useful because it puts biased and unbiased point estimates on a single scale that accounts for both their variance and their bias.
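A numerical check of the decomposition, again a sketch rather than material from the slides, using the biased variance estimator that divides by n:

```python
import random
import statistics

random.seed(2)
true_var = 4.0            # samples come from N(0, 2^2)
n, reps = 5, 100_000

ests = []
for _ in range(reps):
    xs = [random.gauss(0.0, 2.0) for _ in range(n)]
    xbar = sum(xs) / n
    ests.append(sum((x - xbar) ** 2 for x in xs) / n)  # biased: divides by n

mse = statistics.mean((e - true_var) ** 2 for e in ests)
bias = statistics.mean(ests) - true_var
var = statistics.variance(ests)
print(mse)               # empirical MSE
print(var + bias ** 2)   # nearly the same number
```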
7.3 Sampling Distributions 7.3.1 Sample Proportion (1/2)
7.3.1 Sample Proportion (2/2) Standard error of the sample proportion: if X ~ B(n, p), the sample proportion p̂ = X/n has Var(p̂) = p(1 − p)/n, so its standard error is s.e.(p̂) = √(p̂(1 − p̂)/n).
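In code, with assumed counts (37 successes in 200 trials, chosen only for illustration):

```python
import math

x, n = 37, 200                            # successes out of n trials
p_hat = x / n                             # point estimate of p
se = math.sqrt(p_hat * (1 - p_hat) / n)   # standard error of p_hat
print(p_hat, se)
```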
7.3.2 Sample Mean (1/3) Distribution of the sample mean: if X₁, …, Xₙ is a sample from a distribution with mean μ and variance σ², then E(X̄) = μ and Var(X̄) = σ²/n. For a normal population, X̄ ~ N(μ, σ²/n) exactly; by the central limit theorem this holds approximately for large n in general.
7.3.2 Sample Mean (2/3) Standard error of the sample mean: s.e.(X̄) = σ/√n, estimated by s/√n when σ is unknown.
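A minimal sketch on hypothetical data:

```python
import math
import statistics

data = [12.1, 11.8, 12.6, 12.0, 11.5, 12.3, 12.2]  # hypothetical sample
n = len(data)
xbar = statistics.mean(data)        # point estimate of the mean
s = statistics.stdev(data)          # sample standard deviation (n - 1 divisor)
se = s / math.sqrt(n)               # estimated standard error of the mean
print(xbar, se)
```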
7.3.2 Sample Mean (3/3)
7.3.3 Sample Variance (1/2) Distribution of Sample Variance
Theorem: if X₁, …, Xₙ is a sample from a normal population having mean μ and variance σ², then X̄ and S² are independent random variables, with X̄ being normal with mean μ and variance σ²/n and (n − 1)S²/σ² being chi-square with n − 1 degrees of freedom.
(Proof sketch) Let W = Σᵢ (Xᵢ − μ)², which, divided by σ², is chi-square with n degrees of freedom. Writing Xᵢ − μ = (Xᵢ − X̄) + (X̄ − μ) and expanding, W = Σᵢ (Xᵢ − X̄)² + n(X̄ − μ)², since the cross term vanishes, or equivalently W = (n − 1)S² + n(X̄ − μ)². Dividing this equation by σ², we get W/σ² = (n − 1)S²/σ² + ((X̄ − μ)/(σ/√n))², where the last term is the square of a standard normal and hence chi-square with 1 degree of freedom.
Cf. Let X and Y be independent chi-square random variables with m and n degrees of freedom respectively. Then Z = X + Y is a chi-square random variable with m + n degrees of freedom.
In the previous equation, W/σ² is chi-square with n degrees of freedom and ((X̄ − μ)/(σ/√n))² is chi-square with 1 degree of freedom, and it is independent of (n − 1)S²/σ². Therefore, by the additivity property above, (n − 1)S²/σ² must be chi-square with n − 1 degrees of freedom, as the simulation below illustrates.
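This check is a sketch, not part of the slides: (n − 1)S²/σ² should behave like a chi-square random variable with n − 1 degrees of freedom, whose mean is n − 1 and whose variance is 2(n − 1).

```python
import random
import statistics

random.seed(1)
n, sigma, reps = 6, 1.5, 50_000

draws = []
for _ in range(reps):
    xs = [random.gauss(0.0, sigma) for _ in range(n)]
    s2 = statistics.variance(xs)          # sample variance (n - 1 divisor)
    draws.append((n - 1) * s2 / sigma ** 2)

print(statistics.mean(draws))      # close to n - 1 = 5
print(statistics.variance(draws))  # close to 2(n - 1) = 10
```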
7.3.3 Sample Variance (2/2) t-statistic: if X₁, …, Xₙ is a sample from a normal population with mean μ, then T = (X̄ − μ)/(S/√n) has a t distribution with n − 1 degrees of freedom.
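Computing the t-statistic on hypothetical data, with a hypothetical reference value mu0 for μ:

```python
import math
import statistics

data = [5.2, 4.9, 5.5, 5.1, 4.8, 5.3]   # hypothetical sample
mu0 = 5.0                               # hypothesized mean
n = len(data)
t = (statistics.mean(data) - mu0) / (statistics.stdev(data) / math.sqrt(n))
print(t)   # refer to the t distribution with n - 1 = 5 degrees of freedom
```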
7.4 Constructing Parameter Estimates 7.4.1 The Method of Moments (1/3)
Method of moments point estimate for one parameter: if the distribution has a single unknown parameter θ, equate the distribution mean E(X), a function of θ, to the sample mean x̄ and solve for θ.
7.4.1 The Method of Moments (2/3)
Method of moments point estimates for two parameters: equate the first two moments of the distribution to the corresponding sample quantities, that is, solve E(X) = x̄ and Var(X) = s² for the two parameters.
7.4.1 The Method of Moments (3/3)
Example: what if the distribution is exponential with parameter λ? Since E(X) = 1/λ, matching the mean gives the method of moments point estimate λ̂ = 1/x̄, as in the sketch below.
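In code, with made-up data standing in for an exponential sample:

```python
import statistics

data = [0.8, 1.4, 0.3, 2.1, 0.9, 1.7, 0.5]   # made-up sample
lam_hat = 1.0 / statistics.mean(data)        # method of moments: lambda-hat = 1/xbar
print(lam_hat)
```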
7.4.2 Maximum Likelihood Estimates (1/4)
Maximum likelihood estimate for one parameter: the maximum likelihood estimate θ̂ of a parameter θ is the value that maximizes the likelihood function L(θ) = f(x₁; θ) × ⋯ × f(xₙ; θ), or equivalently its logarithm, over the admissible range of θ.
7.4.2 Maximum Likelihood Estimates (2/4)
Example: for a one-parameter illustration, see the exponential sketch below.
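The exponential distribution is chosen here for simplicity and is not necessarily the book's example. Its log-likelihood is l(λ) = n log λ − λ Σxᵢ; setting l′(λ) = n/λ − Σxᵢ = 0 gives λ̂ = n/Σxᵢ = 1/x̄, which for this distribution coincides with the method of moments estimate.

```python
import math
import statistics

data = [0.8, 1.4, 0.3, 2.1, 0.9, 1.7, 0.5]   # made-up sample
lam_hat = 1.0 / statistics.mean(data)        # closed-form MLE

def log_lik(lam):
    """Exponential log-likelihood l(lambda) = n*log(lambda) - lambda*sum(x)."""
    return len(data) * math.log(lam) - lam * sum(data)

# Sanity check: the MLE should beat nearby parameter values.
print(log_lik(lam_hat), log_lik(0.8 * lam_hat), log_lik(1.2 * lam_hat))
```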
7.4.2 Maximum Likelihood Estimates (3/4)
Maximum likelihood estimates for two parameters: with two unknown parameters θ₁ and θ₂, the maximum likelihood estimates are the values that jointly maximize the likelihood function L(θ₁, θ₂) = f(x₁; θ₁, θ₂) × ⋯ × f(xₙ; θ₁, θ₂).
7.4.2 Maximum Likelihood Estimates (4/4)
Example: a standard two-parameter case is the normal distribution, sketched below.
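For a normal sample, maximizing L(μ, σ²) gives μ̂ = x̄ and σ̂² = Σ(xᵢ − x̄)²/n; note the divisor n, so the MLE of σ² is slightly biased, unlike the sample variance S². A sketch on hypothetical data:

```python
import statistics

data = [12.1, 11.8, 12.6, 12.0, 11.5, 12.3, 12.2]     # hypothetical sample
n = len(data)
mu_hat = statistics.mean(data)                         # MLE of the mean
sigma2_hat = sum((x - mu_hat) ** 2 for x in data) / n  # MLE of the variance
print(mu_hat, sigma2_hat)
```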
7.4.3 Examples (1/6) Glass Sheet Flaws: the method of moments. Modeling the number of flaws per sheet as a Poisson distribution with mean λ, matching the mean gives the point estimate λ̂ = x̄.
7.4.3 Examples (2/6) The maximum likelihood estimate: for the Poisson model the likelihood is maximized at the same value, so the maximum likelihood estimate coincides with the method of moments estimate, λ̂ = x̄.
7.4.3 Examples (3/6) Example 26: Fish Tagging and Recapture
7.4.3 Examples (4/6)
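In a capture-recapture problem, M tagged fish are released and a later sample of n fish contains k tagged ones; the likelihood of the total population size N is hypergeometric, and since N is an integer the MLE is found by searching the likelihood rather than by differentiation. The numbers below are illustrative, not the book's:

```python
from math import comb

M, n, k = 50, 40, 7   # tagged, recaptured sample size, tagged in sample

def likelihood(N):
    """Hypergeometric likelihood of population size N."""
    return comb(M, k) * comb(N - M, n - k) / comb(N, n)

# N must be at least M + (n - k); search a generous range of integers.
N_hat = max(range(M + n - k, 2001), key=likelihood)
print(N_hat)   # close to the intuitive estimate M * n / k = 285.7
```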
7.4.3 Examples (5/6) Example 36: Bee Colonies
7.4.3 Examples (6/6)
For some distributions, the MLE cannot be found by differentiation, for instance when the likelihood is maximized at a boundary of the parameter space or over an integer-valued parameter. You have to look at the curve of the likelihood function itself.
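The classic case is a sample from Uniform(0, θ): the likelihood is θ⁻ⁿ for θ ≥ max(xᵢ) and 0 otherwise, so it jumps up at max(xᵢ) and then decreases, and the maximum sits at the boundary θ̂ = max(xᵢ) rather than at a zero of the derivative. A sketch with made-up data:

```python
data = [0.7, 2.3, 1.1, 3.4, 0.2]   # made-up Uniform(0, theta) sample

def likelihood(theta):
    """L(theta) = theta^(-n) if theta >= max(x_i), else 0."""
    return theta ** (-len(data)) if theta >= max(data) else 0.0

theta_hat = max(data)              # MLE at the boundary of the support
print(theta_hat, likelihood(theta_hat), likelihood(1.1 * theta_hat))
```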