1
Estimation
Samples are collected to estimate characteristics of a population of particular interest.
Parameter: a numerical characteristic of the population (e.g., μ, σ²).
Distributional characteristics: the pdf and cdf.
Statistic: a numerical characteristic of the sample, used to estimate parameters.

2
Point Estimation
A sample statistic is often used to estimate and draw conclusions about a population parameter θ.
The sample statistic is called a point estimator of θ.
For a particular sample, the calculated value of the statistic is called a point estimate of θ.

3
Point Estimation
Let X1, X2, …, Xn be a random sample of size n from the population of interest, and let Y = u(X1, X2, …, Xn) be a statistic used to estimate θ.
Then Y is called an estimator of θ.
A specific value of the estimator, y = u(x1, x2, …, xn), is called an estimate of θ.
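The estimator-versus-estimate distinction above can be sketched in a short simulation. This is a minimal illustration, assuming a hypothetical Normal(10, 2) population whose mean μ = 10 plays the role of θ:

```python
import random

random.seed(42)

# Hypothetical population: Normal(mu=10, sigma=2); mu is the parameter theta.
population_mu = 10.0

# A random sample X1, ..., Xn from the population.
sample = [random.gauss(population_mu, 2.0) for _ in range(100)]

# The statistic Y = u(X1, ..., Xn) = sample mean is an estimator of mu.
def sample_mean(xs):
    return sum(xs) / len(xs)

# The value computed from this particular sample is a point estimate of mu.
estimate = sample_mean(sample)
print(round(estimate, 2))
```

A different sample would yield a different estimate; the estimator (the rule) stays the same.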

4
Estimator
Discussion is limited to random variables whose pdf has a known functional form.
The pdf typically depends on an unknown parameter θ, which can take on any value in the parameter space Ω, i.e., f(x; θ).
It is often necessary to pick one member from the family as most likely to be true, i.e., to pick the "best" value of θ for f(x; θ).
The best estimator can depend on the distribution being sampled.

5
Properties of an Estimator
If E[Y] = θ, then the statistic Y is called an unbiased estimator of θ; otherwise, it is said to be biased.
E[Y − θ] is the bias of the estimator.
In many cases, the "best" estimator is an unbiased estimator.
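Bias can be seen numerically by averaging an estimator over many samples. A minimal sketch, assuming a Normal(0, 1) population so the true variance σ² = 1, comparing the divide-by-n and divide-by-(n−1) variance estimators:

```python
import random

random.seed(0)

def var_biased(xs):
    # Divides by n: E[Y] = (n-1)/n * sigma^2, so this estimator is biased.
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def var_unbiased(xs):
    # Divides by n-1: E[Y] = sigma^2, so this estimator is unbiased.
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

n, trials = 5, 20000
samples = [[random.gauss(0, 1) for _ in range(n)] for _ in range(trials)]
mean_biased = sum(var_biased(s) for s in samples) / trials
mean_unbiased = sum(var_unbiased(s) for s in samples) / trials

# The biased average sits near (n-1)/n = 0.8; the unbiased average near 1.
print(round(mean_biased, 2), round(mean_unbiased, 2))
```

The gap between 0.8 and the true value 1 is the bias E[Y − θ] of the divide-by-n estimator at n = 5.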

6
Properties of an Estimator
Another important property is small variance. If two estimators are both unbiased, we prefer the one with smaller variance.
Minimize E[(Y − θ)²] = Var[Y] + (E[Y − θ])².
The estimator Y that minimizes E[(Y − θ)²] is said to have minimum mean square error (MSE).
If we consider only unbiased estimators, the statistic Y that minimizes the MSE is called the minimum variance unbiased estimator (MVUE).
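The decomposition MSE = variance + bias² can be verified numerically. A sketch, again assuming Normal(0, 1) samples so the true θ = σ² = 1, applied to the divide-by-n variance estimator:

```python
import random

random.seed(1)

theta = 1.0        # true sigma^2 for a Normal(0, 1) population
n, trials = 5, 50000

def var_n(xs):
    # Divide-by-n variance estimator (biased).
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

ys = [var_n([random.gauss(0, 1) for _ in range(n)]) for _ in range(trials)]
mean_y = sum(ys) / trials
mse = sum((y - theta) ** 2 for y in ys) / trials     # E[(Y - theta)^2]
var = sum((y - mean_y) ** 2 for y in ys) / trials    # Var[Y]
bias_sq = (mean_y - theta) ** 2                      # (E[Y - theta])^2

# The two sides of the decomposition agree (exactly, for sample moments).
print(round(mse, 3), round(var + bias_sq, 3))
```

For sample moments the identity holds exactly, not just approximately, which makes it a useful sanity check in simulations.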

7
Properties of an Estimator
The efficiency of an estimator θ₁ compared to another estimator θ₂ is equal to the ratio of their mean square errors, MSE(θ₂) / MSE(θ₁); a ratio greater than 1 favors θ₁.
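A classic illustration of this ratio is the sample mean versus the sample median as estimators of the center of a normal population. A simulation sketch, assuming Normal(0, 1) data (for which the asymptotic efficiency of the mean relative to the median is π/2 ≈ 1.57):

```python
import random
import statistics

random.seed(2)

# Estimator 1: sample mean; estimator 2: sample median; true theta = 0.
n, trials = 25, 20000
means, medians = [], []
for _ in range(trials):
    xs = [random.gauss(0, 1) for _ in range(n)]
    means.append(sum(xs) / n)
    medians.append(statistics.median(xs))

def mse(ys):
    # E[(Y - theta)^2] with theta = 0.
    return sum(y * y for y in ys) / len(ys)

efficiency = mse(medians) / mse(means)   # MSE ratio; > 1 favors the mean
print(round(efficiency, 2))
```

For normal data the ratio comes out well above 1, so the sample mean is the more efficient estimator; for heavy-tailed distributions the ordering can reverse.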

8
Method of Maximum Likelihood
An important method for finding an estimator.
Let X1, X2, …, Xn be a random sample of size n from f(x; θ).
The likelihood function is the joint pdf of X1, X2, …, Xn, evaluated at the observed values x1, x2, …, xn, viewed as a function of the parameter of interest:
L(θ) = f(x1, x2, …, xn; θ) = f(x1; θ) f(x2; θ) ··· f(xn; θ),
which is the probability (density) of observing x1, x2, …, xn if the pdf is f(x; θ).
The value of θ that maximizes L(θ) is the value of θ most likely to have produced x1, x2, …, xn.

9
Maximum Likelihood Estimator
The maximum likelihood estimator (MLE) of θ is found by setting the derivative of L(θ) with respect to θ equal to zero and solving for θ.
The MLE can also be found by maximizing the natural log of L(θ), which is often easier to differentiate.
For more than one parameter, the maximum likelihood equations are formed and solved simultaneously to arrive at the MLEs.
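The log-likelihood route above can be sketched for the exponential pdf f(x; θ) = θ e^(−θx), where setting d/dθ log L(θ) = n/θ − Σxᵢ = 0 gives the closed form θ̂ = n / Σxᵢ. A simulation under an assumed true θ = 2:

```python
import math
import random

random.seed(3)

true_theta = 2.0
xs = [random.expovariate(true_theta) for _ in range(500)]

def log_likelihood(theta, xs):
    # log L(theta) = n*log(theta) - theta * sum(x) for the exponential pdf.
    return len(xs) * math.log(theta) - theta * sum(xs)

theta_hat = len(xs) / sum(xs)   # closed-form MLE from the likelihood equation

# Sanity check: the closed form beats nearby parameter values.
assert log_likelihood(theta_hat, xs) >= log_likelihood(theta_hat * 0.9, xs)
assert log_likelihood(theta_hat, xs) >= log_likelihood(theta_hat * 1.1, xs)
print(round(theta_hat, 2))
```

With 500 observations the estimate lands close to the true value 2, and the assertions confirm θ̂ sits at a maximum of the log-likelihood.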

10
Invariance Property
If t is the MLE of θ and u(θ) is a function of θ, then the MLE of u(θ) is u(t).
Plug the MLE(s) into the function to get the MLE of the function.
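Continuing the exponential example: the MLE of θ is n / Σxᵢ, and the population mean is u(θ) = 1/θ, so by invariance the MLE of the mean is u(θ̂) = Σxᵢ / n, i.e., the sample mean. A sketch assuming true θ = 2 (so the true mean is 0.5):

```python
import random

random.seed(4)

xs = [random.expovariate(2.0) for _ in range(1000)]

theta_hat = len(xs) / sum(xs)   # MLE of theta for the exponential pdf
mean_mle = 1.0 / theta_hat      # MLE of u(theta) = 1/theta, by invariance

# Plugging the MLE into u() reproduces the sample mean exactly.
assert abs(mean_mle - sum(xs) / len(xs)) < 1e-12
print(round(mean_mle, 2))
```

No separate maximization over the mean is needed; the invariance property does the work.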

11
Method of Moments
An important method for finding an estimator.
Let X1, X2, …, Xn be a random sample of size n from f(x). The kth population moment is E[X^k]. The kth sample moment is (1/n) Σ Xi^k.
The method of moments involves setting the sample moment(s) equal to the population moment(s) and solving for the parameter(s) of interest.
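As an illustration of the moment-matching step, consider a Uniform(0, θ) population (an assumed example, not from the slides): the first population moment is E[X] = θ/2, so setting the sample mean equal to it and solving gives θ̂ = 2 x̄.

```python
import random

random.seed(5)

true_theta = 6.0
xs = [random.uniform(0, true_theta) for _ in range(2000)]

# First sample moment: (1/n) * sum(Xi).
xbar = sum(xs) / len(xs)

# Set xbar = E[X] = theta / 2 and solve for theta.
theta_hat = 2 * xbar

print(round(theta_hat, 2))
```

With one unknown parameter, one moment equation suffices; with k parameters, the first k moment equations are solved simultaneously.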
