Published by Damion Hayne. Modified over 2 years ago.

Slide 1 (week1): Parameter, Statistic and Random Samples

A parameter is a number that describes the population. It is a fixed number, but in practice we do not know its value. A statistic is a function of the sample data, i.e., a quantity whose value can be calculated from the sample data. It is a random variable with a distribution function. Statistics are used to make inferences about unknown population parameters.

The random variables X_1, X_2, …, X_n are said to form a (simple) random sample of size n if the X_i's are independent random variables and each X_i has the same probability distribution. We then say that the X_i's are i.i.d.

Slide 2 (week1): Example – Sample Mean and Variance

Suppose X_1, X_2, …, X_n is a random sample of size n from a population with mean μ and variance σ². The sample mean is defined as

\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i

and the sample variance is defined as

S^2 = \frac{1}{n-1}\sum_{i=1}^{n} (X_i - \bar{X})^2.
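As a quick illustration (not part of the original slides), both statistics can be computed directly from data; the n − 1 divisor in S² matches the convention used by Python's statistics module:

```python
import statistics

sample = [4.2, 3.9, 5.1, 4.7, 4.4]  # hypothetical data
n = len(sample)

# Sample mean: (1/n) * sum of the observations
xbar = sum(sample) / n

# Sample variance: squared deviations divided by n - 1, not n
s2 = sum((x - xbar) ** 2 for x in sample) / (n - 1)

# The statistics module uses the same definitions
assert abs(xbar - statistics.mean(sample)) < 1e-12
assert abs(s2 - statistics.variance(sample)) < 1e-12
```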

Slide 3 (week1): Goals of Statistics

Estimate the unknown parameters μ and σ². Measure the errors of these estimates. Test whether the sample gives evidence that the parameters are (or are not) equal to a certain value.

Slide 4 (week1): Sampling Distribution of a Statistic

The sampling distribution of a statistic is the distribution of values taken by the statistic in all possible samples of the same size from the same population. The distribution of a statistic is NOT the same as the distribution of the original population that generated the sample. The form of the theoretical sampling distribution of a statistic depends on the distribution of the observable random variables in the sample.

Slide 5 (week1): Sampling from a Normal Population

Often we assume the random sample X_1, X_2, …, X_n is from a normal population with unknown mean μ and variance σ². Suppose we are interested in estimating μ and testing whether it is equal to a certain value. For this we need to know the probability distribution of the estimator of μ.

Slide 6 (week1): Claim

Suppose X_1, X_2, …, X_n are i.i.d. normal random variables with unknown mean μ and variance σ². Then

\bar{X} \sim N\!\left(\mu, \frac{\sigma^2}{n}\right).

Proof: …
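A small simulation (an illustrative sketch, not part of the slides) supports the claim: sample means of normal data cluster around μ with spread σ/√n:

```python
import random
import statistics

random.seed(1)  # for reproducibility
mu, sigma, n = 10.0, 2.0, 25

# Draw many samples of size n and record each sample mean
means = [statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
         for _ in range(20000)]

# The means should be centered near mu = 10,
# with standard deviation near sigma / sqrt(n) = 0.4
print(statistics.fmean(means))
print(statistics.stdev(means))
```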

Slide 7 (week1): Recall – The Chi-Square Distribution

If Z ~ N(0,1), then X = Z² has a chi-square distribution with parameter 1, i.e., X ~ χ²(1). One can prove this using the change-of-variable theorem for univariate random variables. The moment generating function of X is

m_X(t) = (1 - 2t)^{-1/2}, \quad t < 1/2.

If X_1, …, X_n ~ χ²(1), all independent, then

\sum_{i=1}^{n} X_i \sim \chi^2(n).

Proof: …
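The additivity property can be checked numerically (an illustrative sketch, not from the slides): a sum of k squared independent standard normals should have the χ²(k) mean k and variance 2k:

```python
import random

random.seed(2)
k, reps = 5, 40000

# Each draw is a sum of k squared independent N(0,1) variables,
# which by the claim should follow chi-square(k)
draws = [sum(random.gauss(0, 1) ** 2 for _ in range(k)) for _ in range(reps)]

mean = sum(draws) / reps
var = sum((d - mean) ** 2 for d in draws) / (reps - 1)
# chi-square(k) has mean k and variance 2k
print(mean, var)  # should be close to 5 and 10
```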

Slide 8 (week1): Claim

Suppose X_1, X_2, …, X_n are i.i.d. normal random variables with mean μ and variance σ². Then

Z_i = \frac{X_i - \mu}{\sigma}, \quad i = 1, 2, …, n,

are independent standard normal variables, and

\sum_{i=1}^{n} \left(\frac{X_i - \mu}{\sigma}\right)^2 \sim \chi^2(n).

Proof: …

Slide 9 (week1): t Distribution

Suppose Z ~ N(0,1) is independent of X ~ χ²(n). Then

T = \frac{Z}{\sqrt{X/n}} \sim t(n).

Proof: …

Slide 10 (week1): Claim

Suppose X_1, X_2, …, X_n are i.i.d. normal random variables with mean μ and variance σ². Then

\frac{\bar{X} - \mu}{S/\sqrt{n}} \sim t(n-1).

Proof: …
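A simulation sketch (not from the slides) shows why t(n−1), rather than N(0,1), is needed once σ is replaced by S: for small n the statistic exceeds ±1.96 far more often than the 5% a standard normal would give:

```python
import math
import random

random.seed(4)
n, reps = 5, 40000  # small sample size, many replications
hits = 0
for _ in range(reps):
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]  # true mu = 0, sigma = 1
    xbar = sum(xs) / n
    s = math.sqrt(sum((x - xbar) ** 2 for x in xs) / (n - 1))
    t = (xbar - 0.0) / (s / math.sqrt(n))  # the statistic from the claim
    if abs(t) > 1.96:
        hits += 1

frac = hits / reps
print(frac)  # roughly 0.12 for t(4), well above the normal's 0.05
```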

Slide 11 (week1): F Distribution

Suppose X ~ χ²(n) is independent of Y ~ χ²(m). Then

F = \frac{X/n}{Y/m} \sim F(n, m).

Slide 12 (week1): Properties of the F Distribution

The F distribution is a right-skewed distribution. If X ~ F(n, m), then 1/X ~ F(m, n); equivalently, F_{1-\alpha}(n, m) = 1/F_{\alpha}(m, n). One can use Table 7 on page 796 to find percentiles of the F distribution. Example: …

Slide 13 (week1): The Central Limit Theorem

Let X_1, X_2, … be a sequence of i.i.d. random variables with E(X_i) = μ < ∞ and Var(X_i) = σ² < ∞. Let

\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i.

Then, for −∞ < x < ∞,

\lim_{n \to \infty} P\!\left(\frac{\bar{X}_n - \mu}{\sigma/\sqrt{n}} \le x\right) = \Phi(x),

where Z is a standard normal random variable and Φ(x) is the cdf of the standard normal distribution. This is equivalent to saying that (\bar{X}_n - \mu)/(\sigma/\sqrt{n}) converges in distribution to Z ~ N(0,1). Also, with S_n = X_1 + … + X_n,

\frac{S_n - n\mu}{\sigma\sqrt{n}} \xrightarrow{d} Z \sim N(0,1).

Slide 14 (week1): Example

Suppose X_1, X_2, … are i.i.d. random variables, each with the Poisson(3) distribution, so E(X_i) = Var(X_i) = 3. The CLT says that

\frac{\bar{X}_n - 3}{\sqrt{3/n}} \xrightarrow{d} N(0,1) \quad as n → ∞.
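The Poisson case can be checked by simulation (an illustrative sketch; the Poisson sampler below uses Knuth's multiplication method, which is not part of the slides):

```python
import math
import random
from statistics import NormalDist

random.seed(3)

def poisson3():
    """Draw from Poisson(3) via Knuth's multiplication method."""
    limit, k, prod = math.exp(-3.0), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= limit:
            return k
        k += 1

n, reps = 50, 20000
count = 0
for _ in range(reps):
    xbar = sum(poisson3() for _ in range(n)) / n
    z = (xbar - 3.0) / math.sqrt(3.0 / n)  # standardized sample mean
    if z <= 1.0:
        count += 1

frac = count / reps
print(frac, NormalDist().cdf(1.0))  # simulated fraction vs Phi(1) ~ 0.8413
```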

Slide 15 (week1): Examples

A very common application of the CLT is the normal approximation to the binomial distribution. Suppose X_1, X_2, … are i.i.d. random variables, each with the Bernoulli(p) distribution, so E(X_i) = p and Var(X_i) = p(1 − p). The CLT says that

\frac{\bar{X}_n - p}{\sqrt{p(1-p)/n}} \xrightarrow{d} N(0,1) \quad as n → ∞.

Let Y_n = X_1 + … + X_n; then Y_n has a Binomial(n, p) distribution. So for large n,

Y_n \approx N\big(np,\; np(1-p)\big).

Example 1: Suppose we flip a biased coin 1000 times and the probability of heads on any one toss is 0.6. Find the probability of getting at least 550 heads.

Example 2: Suppose we toss a coin 100 times and observe 60 heads. Is the coin fair?
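Both coin questions can be worked with the normal approximation above (a sketch using Python's statistics.NormalDist; the half-unit continuity correction is a standard refinement not stated on the slide):

```python
import math
from statistics import NormalDist

Z = NormalDist()  # standard normal

# Example 1: P(at least 550 heads in 1000 tosses), p = 0.6
n, p = 1000, 0.6
mu, sd = n * p, math.sqrt(n * p * (1 - p))  # 600 and about 15.49
p_at_least_550 = 1 - Z.cdf((549.5 - mu) / sd)  # continuity correction
print(p_at_least_550)  # about 0.9994 -- at least 550 heads is very likely

# Example 2: 60 heads in 100 tosses of a supposedly fair coin
n, p = 100, 0.5
mu, sd = n * p, math.sqrt(n * p * (1 - p))  # 50 and 5
z = (59.5 - mu) / sd  # 1.9 with continuity correction
p_value = 2 * (1 - Z.cdf(z))  # two-sided
print(p_value)  # about 0.057 -- only weak evidence against fairness
```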
