
1 Statistical Inference: Estimation and Hypothesis Testing (Chapter Five). Copyright © 2006 The McGraw-Hill Companies, Inc. All rights reserved. McGraw-Hill/Irwin

2 5-2 Statistical Inference Drawing conclusions about a population based on a random sample from that population. Consider Table D-1 (5-1): Can we use the average P/E ratio of the 28 companies shown as an estimate of the average P/E ratio of the roughly 3,000 stocks on the NYSE? If X = the P/E ratio of a stock and Xbar = the average P/E of the 28 stocks, can we tell what the expected P/E ratio, E(X), is for the whole NYSE?

3 5-3 Table D-1 (5-1) Price-to-earnings (P/E) ratios of 28 companies on the New York Stock Exchange (NYSE).

4 5-4 Estimation Is the First Step The average P/E from a random sample of stocks, Xbar, is an estimator (or sample statistic) of the population average P/E, E(X), which is called the population parameter. (The mean and variance, for example, are the parameters of the normal distribution.) A particular value of an estimator is called an estimate, say Xbar = 23. Estimation is the first step in statistical inference.

5 5-5 How Good Is the Estimate? If we compute Xbar for each of two or more random samples, the estimates will likely not be the same. The variation in estimates from sample to sample is called sampling variation or sampling error. The error is not deliberate; it is inherent in random sampling, since the elements included in the sample vary from sample to sample. What are the characteristics of good estimators?

6 5-6 Hypothesis Testing Suppose expert opinion tells us that the expected P/E of the NYSE is 20, even though our sample Xbar is 23. Is 23 close to the hypothesized value of 20? Is 23 statistically different from 20? Or, statistically, could 23 be not that different from 20? Hypothesis testing is the method by which we answer such questions.

7 5-7 Estimation of Parameters Point estimate: Xbar = 23.25 from Table D-1 (5-1) is a point estimate of μ_X, the population parameter. The formula Xbar = ∑X_i/n is the point estimator, or statistic: a random variable whose value varies from sample to sample. Interval estimate: Is it better to say that the interval from 19 to 24 most likely includes the true μ_X, even though Xbar = 23.25 is our best guess of its value?

8 5-8 Interval Estimates If X ~ N(μ_X, σ²_X), then the sample mean Xbar ~ N(μ_X, σ²_X/n) for a random sample, or Z = (Xbar − μ_X)/(σ_X/√n) ~ N(0, 1). For unknown σ²_X, t = (Xbar − μ_X)/(S_X/√n) ~ t(n−1). Even if X is not normal, Xbar will be approximately normal for large n. We can construct an interval for μ_X using the t distribution with n − 1 = 27 d.f. from Table E-2 (A-2): P(−2.052 < t < 2.052) = 0.95.

9 5-9 Interval Estimates The t values defining this interval, (−2.052, 2.052), are the critical t values: t = −2.052 is the lower critical t value and t = 2.052 is the upper critical t value. See Fig. D-1 (5-1). By substitution, P(−2.052 < (Xbar − μ_X)/(S_X/√n) < 2.052) = 0.95, or P(Xbar − 2.052(S_X/√n) < μ_X < Xbar + 2.052(S_X/√n)) = 0.95. This is an interval estimator of μ_X with a confidence interval of 95%, or a confidence coefficient of 0.95: 0.95 is the probability that the random interval contains the true μ_X.

10 5-10 Figure D-1 (5-1) The t distribution for 27 d.f.

11 5-11 Interval Estimator The interval is random because Xbar and S_X/√n vary from sample to sample. The true but unknown μ_X is some fixed number and is not random. DO NOT SAY: μ_X lies in this interval with probability 0.95. SAY: there is a 0.95 probability that the (random) interval contains the true μ_X.

12 5-12 Example For the P/E example: 23.25 − 2.052(9.49/√28) < μ_X < 23.25 + 2.052(9.49/√28), or 19.57 < μ_X < 26.93 (approx.), is the 95% confidence interval for μ_X. This says that if we were to construct such intervals repeatedly, about 95 out of every 100 of them would contain the true μ_X.
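The interval is easy to reproduce from the summary statistics alone. A minimal Python sketch, assuming scipy is available and using the numbers quoted above:

```python
import math
from scipy import stats

# Summary statistics from the P/E example (Table D-1)
xbar, s, n = 23.25, 9.49, 28

# Two-tailed 95% critical t value with n - 1 = 27 d.f.
t_crit = stats.t.ppf(0.975, df=n - 1)  # ≈ 2.052

half_width = t_crit * s / math.sqrt(n)
print(f"95% CI: ({xbar - half_width:.2f}, {xbar + half_width:.2f})")
# -> 95% CI: (19.57, 26.93)
```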

13 5-13 Figure D-2 (5-2) (a) 95% and (b) 99% confidence intervals for μ_X for 27 d.f.

14 5-14 In General From a random sample of n values X_1, X_2, …, X_n, compute the estimators L and U such that P(L < μ_X < U) = 1 − α. The probability is (1 − α) that the random interval from L to U contains the true μ_X. 1 − α is the confidence coefficient, and α is the level of significance, or the probability of committing a Type I error. Both may be multiplied by 100 and expressed as percentages. If α = 0.05, or 5%, then 1 − α = 0.95, or 95%.

15 5-15 Properties of Point Estimators The properties of Xbar, compared to the sample median or mode, that make it the preferred estimator of the population mean μ_X: linearity; unbiasedness; minimum variance; efficiency; Best Linear Unbiased Estimator (BLUE); consistency.

16 5-16 Properties of Point Estimators Linearity: a linear estimator is a linear function of the sample observations, e.g., Xbar = ∑(X_i/n) = (1/n)(X_1 + X_2 + … + X_n); the X's appear with an index or power of 1 only. Unbiasedness: E(Xbar) = μ_X (Fig. D-3, 5-3). If, in repeated applications of a method, the mean value of an estimator equals the true parameter (population) value, the estimator is unbiased. In repeated sampling, the sample mean and sample median are both unbiased estimators of the population mean, as the simulation below illustrates.
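A quick Monte Carlo sketch of unbiasedness (the normal population with μ = 20, σ = 3 and the sample size n = 25 are illustrative assumptions, not values from the text):

```python
import numpy as np

# Average the sample mean and the sample median over many random samples;
# both averages should be close to the true mean mu = 20 (assumed here).
rng = np.random.default_rng(42)
mu, sigma, n, reps = 20.0, 3.0, 25, 100_000

samples = rng.normal(mu, sigma, size=(reps, n))
print("mean of sample means:  ", samples.mean(axis=1).mean())        # ≈ 20
print("mean of sample medians:", np.median(samples, axis=1).mean())  # ≈ 20
```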

17 5-17 Figure D-3 (5-3) Biased (X*) and unbiased (X) estimators of the population mean value, μ_X.

18 5-18 Properties of Point Estimators Minimum variance: a minimum-variance estimator has a smaller variance than any other estimator of the parameter. In Fig. D-4 (5-4), the minimum-variance estimator of μ_X is also biased. Efficiency (Fig. D-5, 5-5): among unbiased estimators, the one with the smallest variance is the best, or efficient, estimator.

19 5-19 Figure D-4 (5-4) Distribution of three estimators of μ_X.

20 5-20 Figure D-5 (5-5) An example of an efficient estimator (the sample mean).

21 5-21 Properties of Point Estimators Efficiency example: Xbar ~ N(μ_X, σ²/n) for the sample mean, and Xmed ~ N(μ_X, (π/2)(σ²/n)) for the sample median in large samples, so var(Xmed)/var(Xbar) = π/2 ≈ 1.571 and Xbar is the more precise estimator of μ_X (see the simulation below). Best Linear Unbiased Estimator (BLUE): an estimator that is linear, unbiased, and has the minimum variance among all linear unbiased estimators of a parameter.
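The π/2 variance ratio can be checked by simulation. A minimal sketch, assuming a standard normal population and a fairly large sample size (the ratio is asymptotic, so small n gives a somewhat smaller value):

```python
import numpy as np

# Compare the sampling variance of the median with that of the mean.
rng = np.random.default_rng(0)
n, reps = 201, 50_000

samples = rng.standard_normal((reps, n))
var_mean = samples.mean(axis=1).var()
var_med = np.median(samples, axis=1).var()
print(var_med / var_mean)  # ≈ 1.56, close to pi/2 ≈ 1.571
```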

22 5-22 Properties of Point Estimators Consistency (Fig. D-6, 5-6): a consistent estimator approaches the true value of the parameter as the sample size becomes large. Consider Xbar = ∑X_i/n and X* = ∑X_i/(n + 1). E(Xbar) = μ_X, but E(X*) = [n/(n + 1)]μ_X, so X* is biased. As n gets large, n/(n + 1) → 1 and E(X*) → μ_X: X* is a biased but consistent estimator of μ_X, as the sketch below suggests.
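A small simulation of that shrinking bias (the normal population with μ = 10, σ = 2 is an illustrative assumption):

```python
import numpy as np

# E(X*) = [n/(n+1)] * mu, so the simulated average of X* should
# approach mu = 10 as n grows.
rng = np.random.default_rng(1)
mu, reps = 10.0, 2_000

for n in (5, 50, 500, 5000):
    x = rng.normal(mu, 2.0, size=(reps, n))
    x_star = x.sum(axis=1) / (n + 1)
    print(n, round(x_star.mean(), 3))  # ≈ 8.33, 9.80, 9.98, 9.998
```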

23 5-23 Figure D-6 (5-6) The property of consistency: the behavior of the estimator X* of the population mean μ_X as the sample size increases.

24 5-24 Hypothesis Testing Suppose we hypothesize that the true mean P/E ratio for the NYSE is 18.5. Null hypothesis H_0: μ_X = 18.5. Alternative hypothesis H_1, one of: H_1: μ_X > 18.5 (one-sided or one-tailed); H_1: μ_X < 18.5 (one-sided or one-tailed); H_1: μ_X ≠ 18.5 (composite, two-sided or two-tailed). Use the sample data (Table D-1 (5-1), average P/E = 23.25) to accept or reject H_0 and/or accept H_1.

25 5-25 Confidence Interval Approach H_0: μ_X = 18.5, H_1: μ_X ≠ 18.5 (two-tailed). We know t = (Xbar − μ_X)/(S_X/√n) ~ t(n−1). Use Table E-2 (A-2) to construct the 95% interval. Critical t values: (−2.052, 2.052) for 95%, or 0.95. P(Xbar − 2.052(S_X/√n) < μ_X < Xbar + 2.052(S_X/√n)) = 0.95: 23.25 − 2.052(9.49/√28) < μ_X < 23.25 + 2.052(9.49/√28), or 19.57 < μ_X < 26.93. The hypothesized value H_0: μ_X = 18.5 < 19.57 falls outside the interval, so reject H_0 with 95% confidence.

26 5-26 Confidence Interval Approach Acceptance region: the 95% interval 19.57 < μ_X < 26.93. Critical region, or region of rejection: μ_X < 19.57 and μ_X > 26.93. Accept H_0 if the hypothesized value falls within the acceptance region; reject H_0 if it falls outside. The critical values are the dividing line between acceptance and rejection of H_0.

27 5-27 Type I and Type II Errors We rejected H_0: μ_X = 18.5 at a 95% level of confidence, not 100%. Type I error: rejecting H_0 when it is true. Had we hypothesized H_0: μ_X = 21 above, we would not have rejected it with 95% confidence, since 21 lies inside the interval. Type II error: accepting H_0 when it is false. For any given sample size, one cannot minimize the probability of both types of error.

28 5-28 Type I and Type II Errors Level of significance, α: P(Type I error) = α = P(reject H_0 | H_0 is true). Power of the test, (1 − β): P(Type II error) = β = P(accept H_0 | H_0 is false). Trade-off: minimizing α versus maximizing (1 − β). In practice, set α fairly low (0.05 or 0.01) and do not worry too much about (1 − β); the simulation below shows how power could be estimated.
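Power is not computed on these slides, but a Monte Carlo sketch makes the idea concrete. Here the true mean is assumed to equal the observed sample value 23.25 (purely for illustration):

```python
import numpy as np
from scipy import stats

# Estimated power of the two-tailed t test of H0: mu = 18.5 at
# alpha = 0.05 with n = 28, when the true mean is 23.25 (assumed).
rng = np.random.default_rng(7)
mu0, mu_true, sigma, n, alpha, reps = 18.5, 23.25, 9.49, 28, 0.05, 20_000

t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)
x = rng.normal(mu_true, sigma, size=(reps, n))
t = (x.mean(axis=1) - mu0) / (x.std(axis=1, ddof=1) / np.sqrt(n))
print("estimated power:", np.mean(np.abs(t) > t_crit))  # roughly 0.73
```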

29 5-29 Example H_0: μ_X = 18.5 and α = 0.01 (99% confidence). Critical t values: (−2.771, 2.771) with 27 d.f. 18.28 < μ_X < 28.22 is the 99% confidence interval; 18.5 now falls inside it, so do not reject H_0. See Fig. D-2 (5-2). Decreasing α, P(Type I error), increases β, P(Type II error).

30 5-30 Test of Significance Approach For one-sided or one-tailed tests, recall t = (Xbar − μ_X)/(S_X/√n). We know Xbar, S_X, and n, and we hypothesize μ_X, so we can simply calculate the value of t for our sample and hypothesis, then look up its probability in Table E-2 (A-2). Compare that probability to the chosen level of significance, α, to see whether to reject H_0.

31 5-31 Example P/E example: Xbar = 23.25, S_X = 9.49, n = 28. H_0: μ_X = 18.5, H_1: μ_X ≠ 18.5. t = (23.25 − 18.5)/(9.49/√28) = 2.6486. Set α = 0.05 in a two-tailed test (why?). The critical t values are (−2.052, 2.052) for 27 d.f.; 2.65 is outside the acceptance region, so reject H_0 at the 5% level of significance. Rejecting the null means the test is statistically significant; not rejecting means it is statistically insignificant: the difference between the observed (estimated) and hypothesized values of the parameter is, or is not, statistically significant. (A sketch of the computation follows.)
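A minimal sketch of this test from the summary statistics, with scipy supplying the critical value:

```python
import math
from scipy import stats

# t test of H0: mu = 18.5 against H1: mu != 18.5 for the P/E data
xbar, s, n, mu0, alpha = 23.25, 9.49, 28, 18.5, 0.05

t_stat = (xbar - mu0) / (s / math.sqrt(n))     # ≈ 2.6486
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)  # ≈ 2.052
print(t_stat, t_crit, abs(t_stat) > t_crit)    # True -> reject H0
```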

32 5-32 One Tail or Two? H_0: μ_X = 18.5, H_1: μ_X ≠ 18.5: a two-tailed test. H_0: μ_X ≤ 18.5, H_1: μ_X > 18.5: a one-tailed test. The testing procedure is exactly the same. Choose α = 0.05; the one-tailed critical t value is 1.703 for 27 d.f. Since 2.65 > 1.703, reject H_0 at the 5% level of significance: the test (statistic) is statistically significant. See Fig. D-7 (5-7).

33 5-33 Figure D-7 (5-7) The t test of significance: (a) two-tailed; (b) right-tailed; (c) left-tailed.

34 5-34 Table 5-2 A summary of the t test.

35 5-35 The Level of Significance and the p-Value The choice of α is arbitrary in the classical approach; 1%, 5%, and 10% are commonly used. Alternatively, calculate the p-value, a.k.a. the exact significance level of the test statistic. For the P/E example with H_0: μ_X ≤ 18.5 and t ≈ 2.65: P(t(27) > 2.65) < 0.01, so the p-value is below 0.01, or 1%, and the result is statistically significant at the 1% level. In econometric studies, p-values are commonly reported (or indicated) for all statistical tests. (See the computation below.)
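The exact p-value is one line with scipy's t distribution:

```python
from scipy import stats

# One-sided p-value for t = 2.6486 with 27 d.f. (H1: mu > 18.5)
p_one_sided = stats.t.sf(2.6486, df=27)
print(p_one_sided)      # ≈ 0.0067, i.e. below 0.01
print(2 * p_one_sided)  # two-sided p-value ≈ 0.013
```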

36 5-36 The χ² Test of Significance (n − 1)(S²/σ²) ~ χ²(n−1). We know n and S², and we hypothesize σ², so we can calculate the χ² value directly and test its significance. Example: n = 31, S² = 12, H_0: σ² = 9, H_1: σ² ≠ 9, with α = 5%. χ²(30) = 30(12/9) = 40. P(χ²(30) > 40) ≈ 10% > 5% = α, so do not reject H_0: σ² = 9. (A sketch of the computation follows.)
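A minimal sketch of the same calculation; scipy's chi2.sf gives the upper-tail probability:

```python
from scipy import stats

# Chi-square test of H0: sigma^2 = 9 with n = 31 and S^2 = 12
n, s2, sigma2_0 = 31, 12.0, 9.0

chi2_stat = (n - 1) * s2 / sigma2_0           # = 40
p_upper = stats.chi2.sf(chi2_stat, df=n - 1)  # ≈ 0.105, about 10%
print(chi2_stat, p_upper)                     # 0.105 > 0.05 -> do not reject
```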

37 5-37 Table 5-3 A summary of the χ² test.

38 5-38 F Test of Significance F = S²_X/S²_Y = [∑(X − Xbar)²/(m − 1)]/[∑(Y − Ybar)²/(n − 1)] follows the F distribution with (m − 1, n − 1) d.f. if σ²_X = σ²_Y, so we test H_0: σ²_X = σ²_Y. Example: SAT scores in Ex. 4.15. Variance (males) = 46.1, variance (females) = 83.88, n = 24 for both. F = 83.88/46.1 ≈ 1.82 with (23, 23) d.f. The nearest tabled critical F value, for 24 d.f. each at 1%, is 2.66. Since 1.82 < 2.66, the result is not statistically significant: do not reject H_0. (A sketch of the computation follows.)
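The same test in a few lines; scipy's f.ppf gives the exact critical value for (23, 23) d.f., slightly above the tabled 2.66 for 24 d.f.:

```python
from scipy import stats

# F test of H0: sigma_X^2 = sigma_Y^2 for the SAT example
var_f, var_m, n = 83.88, 46.1, 24

f_stat = var_f / var_m                            # ≈ 1.82, (23, 23) d.f.
f_crit = stats.f.ppf(0.99, dfn=n - 1, dfd=n - 1)  # ≈ 2.7
print(f_stat, f_crit, f_stat > f_crit)            # False -> do not reject H0
```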

39 5-39 Table 5-4 A summary of the F statistic.

