 # Lecture (11,12) Parameter Estimation of PDF and Fitting a Distribution Function.

How can we specify a distribution from the data? It is a two-step procedure:

1. Decide which family to use (Normal, Log-normal, Exponential, etc.). This step is done by:
   - guessing the family by looking at the observations;
   - using the Chi-square goodness-of-fit test to test the guess.
2. Decide which member of the chosen family to use. This means specifying the values of the parameters, which is done by producing estimates of the parameters based on the observations in the sample.

Estimation Estimation deals with the second step: deciding which member of the chosen family to use, that is, specifying the values of the parameters.

General Concept of Modelling

Point Estimates A point estimate of an unknown parameter is a number which, to the best of our knowledge, represents the parameter value. Each random sample gives a different estimate, so the estimator is regarded as a random variable. A good estimator has the following properties: 1. It gives a good result on average, not one that is always too big or always too small. 2. It is unbiased: the expected value of the estimator equals the true value of the parameter. 3. Its variance is small.

Unbiased Estimators
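This slide is figure-only in the transcript. The unbiasedness criterion can be illustrated by simulation: dividing the sum of squared deviations by n − 1 rather than n makes the sample variance unbiased. A minimal sketch (the population parameters and sample size are arbitrary choices for illustration):

```python
import random

random.seed(0)
mu, sigma2 = 5.0, 4.0        # hypothetical population mean and variance
n, trials = 10, 20000        # small samples, many repetitions

biased_sum = unbiased_sum = 0.0
for _ in range(trials):
    sample = [random.gauss(mu, sigma2 ** 0.5) for _ in range(n)]
    m = sum(sample) / n
    ss = sum((x - m) ** 2 for x in sample)
    biased_sum += ss / n          # dividing by n underestimates sigma^2
    unbiased_sum += ss / (n - 1)  # dividing by n - 1 is unbiased

biased = biased_sum / trials       # close to sigma^2 * (n-1)/n = 3.6
unbiased = unbiased_sum / trials   # close to sigma^2 = 4.0
```

On average the n-divisor estimator comes out too small, while the (n − 1)-divisor estimator centers on the true variance.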

Method of Moments

Method of Moments (Cont.)
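The method-of-moments slides are figure-only in the transcript. As an illustrative sketch, the method can be applied to an exponential distribution: equating the first sample moment (the sample mean) to the first theoretical moment E[X] = 1/λ and solving gives the estimate λ̂ = 1/x̄. The rate value 2.0 below is an arbitrary choice:

```python
import random

random.seed(1)
true_rate = 2.0   # hypothetical "unknown" parameter to be recovered
sample = [random.expovariate(true_rate) for _ in range(5000)]

# Method of moments: match the sample mean to E[X] = 1/rate, solve for rate.
sample_mean = sum(sample) / len(sample)
rate_hat = 1.0 / sample_mean   # close to true_rate for a large sample
```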

Mean of the Means

Standard Deviation of the Means
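These two slides are figure-only in the transcript. The standard results they illustrate, that sample means average to μ and have standard deviation σ/√n, can be checked by simulation (μ = 10, σ = 3, n = 25 are arbitrary choices):

```python
import random
import statistics

random.seed(2)
mu, sigma, n, trials = 10.0, 3.0, 25, 10000

# Draw many samples of size n and record the mean of each one.
means = []
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    means.append(sum(sample) / n)

mean_of_means = statistics.mean(means)   # close to mu
sd_of_means = statistics.pstdev(means)   # close to sigma / sqrt(n) = 0.6
```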

Confidence Intervals When we estimate a parameter from a sample, the estimate can differ from sample to sample. It is therefore better to indicate the reliability of the estimate. This can be done by stating the confidence of the result.

Confidence Intervals for the Mean

Confidence Interval for the Mean (cont.) A general expression for a 100(1 − α)% confidence interval for the mean is given by: x̄ ± z_{α/2} · σ/√n.

Confidence Interval For the Mean (Cont.) According to the above formula we have 90% 95% 98% 99% These formulae apply for any population as long as the sample size is sufficiently large for the central limit theorem to hold

Statistical Inference for the Population Variance For normal populations, statistical inference procedures are available for the population variance. The sample variance S² is an unbiased estimator of σ². We assume we have a random sample of n observations from a normal population with unknown variance σ².

The Chi Square Distribution If the population is Normal with variance σ², then the statistic χ² = (n − 1)S²/σ² has a Chi-square distribution with (n − 1) degrees of freedom.

Confidence Region for the Variance Using this result, a 100(1 − α)% confidence interval for σ² is given by the interval: ( (n − 1)S² / χ²_{α/2, n−1} , (n − 1)S² / χ²_{1−α/2, n−1} ).

Confidence Region for the Variance (figure: Chi-square distribution with (n − 1) d.f. showing the two tail critical values)

Estimation of the Confidence Intervals of the variance
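This slide is figure-only in the transcript. A sketch of the interval formula from the previous slide; S² = 10 and n = 4 are made-up figures, and the critical values for 3 d.f. are taken from standard chi-square tables:

```python
def variance_ci(s2, n, chi2_upper, chi2_lower):
    """100(1 - alpha)% CI for sigma^2 from a normal sample of size n.

    chi2_upper = chi2_{alpha/2, n-1}, chi2_lower = chi2_{1-alpha/2, n-1}.
    """
    return (n - 1) * s2 / chi2_upper, (n - 1) * s2 / chi2_lower

# Example: S^2 = 10.0 from n = 4 observations, alpha = 0.05.
# Critical values for 3 d.f. from standard chi-square tables:
lo, hi = variance_ci(10.0, 4, chi2_upper=9.3484, chi2_lower=0.2158)
```

The interval is very wide here because n is tiny; chi-square intervals for the variance tighten slowly as the sample grows.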

Fitting a Distribution Function

Goodness-of-Fit Test A goodness-of-fit test is an inferential procedure used to determine whether a frequency distribution follows a claimed distribution.

Hypothesis Testing
- Hypothesis: a statement which can be proven false.
- Null hypothesis (H0): "There is no difference."
- Alternative hypothesis (H1): "There is a difference."

In statistical testing, we try to reject the null hypothesis. If the null hypothesis is false, it is likely that our alternative hypothesis is true; "false" here means there is only a small probability that the results we observed could have occurred by chance.

Application of Hypothesis Testing to Goodness of Fit Hypotheses: H0: the distribution function is a good fit to the empirical distribution. H1: the distribution function is not a good fit to the empirical distribution. Testing of hypotheses is a procedure for deciding whether to accept or reject the hypothesis. The Chi-square test can be used to test whether the fit is satisfactory.

Testing Goodness of Fit of a Distribution Function to an Empirical Distribution

| Unknown real situation | Accept H0 | Reject H0 |
| --- | --- | --- |
| H0 is true | Correct decision | Type I error (probability α) |
| H0 is false | Type II error (probability β) | Correct decision |

Common Values for Significance Levels

The Chi-Square Distribution 1. It is not symmetric. 2. The shape of the chi-square distribution depends upon the degrees of freedom. 3. As the number of degrees of freedom increases, the chi-square distribution becomes more symmetric, as illustrated in the figure. 4. The values are non-negative: the values of χ² are greater than or equal to 0.

Chi² Degrees of Freedom All statistical tests require the computation of degrees of freedom. For the Chi² test: df = (number of classes − 1).

Critical Values of Chi²

| df | 0.10 | 0.05 | 0.025 | 0.01 | 0.005 |
| --- | --- | --- | --- | --- | --- |
| 1 | 2.7055 | 3.8415 | 5.0239 | 6.6349 | 7.8794 |
| 2 | 4.6052 | 5.9915 | 7.3778 | 9.2104 | 10.5965 |
| 3 | 6.2514 | 7.8147 | 9.3484 | 11.3449 | 12.8381 |

Chi-Square Table

Procedure for Chi Square Test Step 1: A claim is made regarding a distribution. The claim is used to determine the null and alternative hypotheses. H0: the random variable follows the claimed distribution. H1: the random variable does not follow the claimed distribution.

Procedure for Chi Square Test (cont.) Step 2: Calculate the expected frequencies for each of the k classes. The expected frequencies are E_i, i = 1, 2, …, k, assuming the null hypothesis is true.

Procedure for Chi Square Test (cont.) Step 3: Verify that the requirements for the goodness-of-fit test are satisfied: (1) all expected frequencies are greater than or equal to 1 (all E_i ≥ 1); (2) no more than 20% of the expected frequencies are less than 5.
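Steps 2 and 3, together with the test statistic used in the examples that follow, can be sketched as a small helper. The die-roll counts in the usage example are hypothetical:

```python
def chi_square_gof(observed, expected):
    """Chi-square goodness-of-fit statistic with the slide's requirement checks."""
    # Step 3 requirements: every E_i >= 1, and at most 20% of the E_i below 5.
    if any(e < 1 for e in expected):
        raise ValueError("all expected frequencies must be >= 1")
    if sum(1 for e in expected if e < 5) > 0.2 * len(expected):
        raise ValueError("more than 20% of expected frequencies are below 5")
    # Test statistic: sum over the k classes of (O_i - E_i)^2 / E_i.
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical example: 60 die rolls, expected 10 per face under H0.
stat = chi_square_gof([12, 8, 10, 9, 11, 10], [10] * 6)
reject_h0 = stat > 11.0705   # chi2_{0.05, 5} from standard tables
```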

Example 1 (Discrete Variable)

Example 1 (cont.) Observed frequency: the obtained frequency for each category.

Example 1 (cont.) State the research hypothesis: is the rat's behavior random? State the statistical hypotheses.

Example 1 (cont.) Under the null hypothesis, each category has probability 0.25 if picked by chance.

Example 1 (cont.) Expected frequency: the hypothesized frequency for each category, given the null hypothesis is true. It equals the expected proportion multiplied by the number of observations.

Example 1 (cont.) Set the decision rule. Degrees of freedom = number of categories (C) − 1.

Example 1 (cont.) Set the decision rule.

Example 1 (cont.) Calculate the test statistic.

Example 1 (cont.) Decide if your result is significant: reject H0, since 9.25 > 7.81. Interpret your results: the rat's behavior was not random. (Figure: χ² distribution with critical value 7.81 separating the "do not reject H0" and "reject H0" regions.)
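The decision can be reproduced from the numbers given on the slides. The statistic 9.25 is taken from the example; four categories is inferred from the expected proportion of 0.25, and the critical value is χ²_{0.05,3} from the critical-value table:

```python
categories = 4                  # inferred from the expected proportion 0.25
df = categories - 1             # degrees of freedom = 3
critical = 7.8147               # chi2_{0.05, 3} from the critical-value table
statistic = 9.25                # test statistic computed on the slide
reject_h0 = statistic > critical   # True: the behavior was not random
```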

Example 2 (Continuous Variable) Observed frequency of 10 size classes of shale thicknesses:

| Class number | Class interval | Class mark | Observed frequency | Expected frequency |
| --- | --- | --- | --- | --- |
| 1 | 0.00–1.00 | 0.5 | 14 | 20 |
| 2 | 1.00–2.00 | 1.5 | 18 | 20 |
| 3 | 2.00–3.00 | 2.5 | 26 | 20 |
| 4 | 3.00–4.00 | 3.5 | 18 | 20 |
| 5 | 4.00–5.00 | 4.5 | 20 | 20 |
| 6 | 5.00–6.00 | 5.5 | 18 | 20 |
| 7 | 6.00–7.00 | 6.5 | 24 | 20 |
| 8 | 7.00–8.00 | 7.5 | 22 | 20 |
| 9 | 8.00–9.00 | 8.5 | 16 | 20 |
| 10 | 9.00–10.0 | 9.5 | 24 | 20 |

Example (Cont.)

Chi² Graph (figure: Chi-square distribution with critical value 16.92)
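The test for Example 2 can be carried out directly from the class frequencies. Class 5's observed frequency of 20 is inferred from the totals (the transcript garbles that row), and 16.92 is χ²_{0.05,9}:

```python
observed = [14, 18, 26, 18, 20, 18, 24, 22, 16, 24]   # shale-thickness counts
expected = [20] * 10                                   # 200 observations / 10 classes

# Chi-square statistic over the 10 classes.
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# df = 10 - 1 = 9; critical value chi2_{0.05, 9} = 16.92 (from the graph).
reject_h0 = chi2 > 16.92
```

The statistic comes out well below the critical value, so the fitted distribution is not rejected at the 5% level.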

Exercise In Exercise 1, test the hypothesis that the distribution is Normal, for α = 0.05.

Other Statistical Tests The Chi² test and the independent t-test are very useful. A variety of other tests are available for other research designs. Parametric examples follow: t-tests are used to compare 2 groups; F-tests (analysis-of-variance tests) are used to compare more than 2 groups.

Common Statistical Tests

| Question | Test |
| --- | --- |
| Does a single observation belong to a population of values? | Z-test (standard Normal distribution) |
| Are two (or more) populations of numbers different? | T-test, F-test (ANOVA) |
| Is there a relationship between x and y? | Regression |
| Is there a trend in the data (special case of the above)? | Regression |

SPSS and Computer Applications Most actual analysis is done by computer. Simple tests are easily done in Excel; sophisticated programs (such as SPSS) are used for more complicated designs.
