Lecture 43 Section 14.1 – 14.3 Mon, Nov 28, 2005

Test of Goodness of Fit Lecture 43 Section 14.1 – 14.3 Mon, Nov 28, 2005

Count Data Count data – Data that counts the number of observations that fall into each of several categories. The data may be univariate or bivariate. Univariate example – Observe a student’s final grade: A – F. Bivariate example – Observe a student’s final grade and year in college: A – F and freshman – senior.

Univariate Example Observe students' final grades in statistics: A, B, C, D, or F.

Grade:  A   B   C   D   F
Count:  5  12   8   4   2

Bivariate Example Observe students' final grade in statistics and year in college.

[Two-way table of counts: grades A, B, C, D, F across the columns; class years Fresh, Soph, Junior, Senior down the rows.]

Observed and Expected Counts Observed counts – The counts that were actually observed in the sample. Expected counts – The counts that would be expected if the null hypothesis were true.

The Chi-Square Statistic Denote the observed counts by O and the expected counts by E. Define the chi-square (χ²) statistic to be χ² = Σ (O − E)² / E, where the sum runs over all categories. Clearly, if the observed counts are close to the expected counts, then χ² will be small. If even a few observed counts are far from the expected counts, then χ² will be large.
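
For readers following along outside the TI-83, here is a minimal Python sketch (not part of the original slides) of the same computation; the counts are the die-roll data used later in this lecture:

    # Chi-square statistic: sum of (O - E)^2 / E over all categories.
    def chi_square(observed, expected):
        return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

    # Die-roll counts from the examples below; the expected count is 10 per face.
    print(chi_square([8, 10, 14, 12, 9, 7], [10] * 6))  # 3.4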

Think About It Think About It, p. 923.

Chi-Square Degrees of Freedom The chi-square distribution has an associated number of degrees of freedom, just like the t distribution. Each chi-square distribution has a slightly different shape, depending on the number of degrees of freedom.

Chi-Square Degrees of Freedom [Figure: chi-square density curves for χ²(2), χ²(5), and χ²(10).]

Properties of 2 The chi-square distribution with df degrees of freedom has the following properties. 2  0. It is unimodal. It is skewed right (not symmetric!) 2 = df. 2 = (2df). If df is large, then 2(df) is approximately N(df, (2df)).

Chi-Square vs. Normal N(30,60) 2(30) 2(32) N(32, 8)

Chi-Square vs. Normal [Figure: χ²(128) compared with N(128, 16).]

The Chi-Square Table See page A-11. The left column is degrees of freedom: 1, 2, 3, …, 15, 16, 18, 20, 24, 30, 40, 60, 120. The column headings represent areas of lower tails: 0.005, 0.01, 0.025, 0.05, 0.10, 0.90, 0.95, 0.975, 0.99, 0.995. Of course, the lower tails 0.90, 0.95, 0.975, 0.99, 0.995 are the same as the upper tails 0.10, 0.05, 0.025, 0.01, 0.005.

Example If df = 10, what value of χ² cuts off a lower tail of 0.05? If df = 10, what value of χ² cuts off an upper tail of 0.05?
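
The same table lookups can be checked with the chi-square inverse CDF; a sketch assuming SciPy:

    from scipy.stats import chi2

    # Value cutting off a lower tail of 0.05 (the 5th percentile), df = 10.
    print(chi2.ppf(0.05, df=10))   # about 3.94
    # Value cutting off an upper tail of 0.05 (the 95th percentile), df = 10.
    print(chi2.ppf(0.95, df=10))   # about 18.31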

TI-83 – Chi-Square Probabilities To find a chi-square probability on the TI-83:
Press DISTR.
Select χ²cdf (item #7).
Press ENTER.
Enter the lower endpoint, the upper endpoint, and the degrees of freedom.
The probability appears.

Example If df = 8, what is the probability that χ² will fall between 4 and 12? Compute χ²cdf(4, 12, 8). If df = 32, what is the probability that χ² will fall between 24 and 40? Compute χ²cdf(24, 40, 32). If df = 128, what is the probability that χ² will fall between 96 and 160? Compute χ²cdf(96, 160, 128).
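
The same probabilities as differences of CDF values (a sketch assuming SciPy), mirroring the χ²cdf(lower, upper, df) call:

    from scipy.stats import chi2

    print(chi2.cdf(12, df=8) - chi2.cdf(4, df=8))        # like chi-square cdf(4, 12, 8)
    print(chi2.cdf(40, df=32) - chi2.cdf(24, df=32))     # like chi-square cdf(24, 40, 32)
    print(chi2.cdf(160, df=128) - chi2.cdf(96, df=128))  # like chi-square cdf(96, 160, 128)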

Tests of Goodness of Fit The goodness-of-fit test applies only to univariate data. The null hypothesis specifies a discrete distribution for the population. We want to determine whether a sample from that population supports this hypothesis.

Examples If we roll a die 60 times, we expect 10 of each number. If we get frequencies 8, 10, 14, 12, 9, 7, does that indicate that the die is not fair? If we toss a fair coin twice, we should get two heads ¼ of the time, two tails ¼ of the time, and one of each ½ of the time. Suppose we repeat the two-toss experiment 100 times and get two heads 16 times, two tails 36 times, and one of each 48 times. Is the coin fair?

Examples If we selected 20 people from a group that was 60% male and 40% female, we would expect to get 12 males and 8 females. If we got 15 males and 5 females, would that indicate that our selection procedure was not random (i.e., discriminatory)?

Null Hypothesis The null hypothesis specifies the probability (or proportion) for each category. Each probability is the probability that a random observation would fall into that category.

Null Hypothesis To test a die for fairness, the null hypothesis would be H0: p1 = 1/6, p2 = 1/6, …, p6 = 1/6. The alternative hypothesis will always be a simple negation of H0: H1: At least one of the probabilities is not 1/6, or, more simply, H1: H0 is false.

Expected Counts To find the expected counts, we apply the hypothetical probabilities to the sample size. For example, if the hypothetical probability is 1/6 and the sample size is 60, then the expected count is (1/6) × 60 = 10.
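
In code form (a sketch, not from the slides), the expected counts are just the hypothesized probabilities scaled by the sample size:

    # Fair-die hypothesis: each face has probability 1/6; sample size 60.
    probabilities = [1 / 6] * 6
    n = 60
    expected = [p * n for p in probabilities]
    print(expected)   # [10.0, 10.0, 10.0, 10.0, 10.0, 10.0]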

Example We will use the sample data given for 60 rolls of a die to calculate the χ² statistic. Make a chart showing both the observed and expected counts (in parentheses).

Face:   1       2        3        4        5       6
Count:  8 (10)  10 (10)  14 (10)  12 (10)  9 (10)  7 (10)

Example Now calculate χ²: χ² = (8 − 10)²/10 + (10 − 10)²/10 + (14 − 10)²/10 + (12 − 10)²/10 + (9 − 10)²/10 + (7 − 10)²/10 = 34/10 = 3.4.

Computing the p-value The number of degrees of freedom is 1 less than the number of categories in the table. In this example, df = 5. To find the p-value, use the TI-83 to calculate the probability that χ²(5) would be at least as large as 3.4: p-value = χ²cdf(3.4, E99, 5) = 0.6386. Since this p-value is large, we do not reject H0.
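
The whole die example in one short Python sketch (assuming SciPy); chi2.sf gives the upper-tail area, the same quantity as χ²cdf(3.4, E99, 5):

    from scipy.stats import chi2

    observed = [8, 10, 14, 12, 9, 7]
    expected = [10] * 6
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))   # 3.4
    df = len(observed) - 1                                             # 5
    print(stat, chi2.sf(stat, df))                                     # 3.4, about 0.639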

The Effect of the Sample Size What if the previous sample distribution persisted in a much larger sample, say n = 6000? Would it be significant?

Face:   1           2            3            4            5           6
Count:  800 (1000)  1000 (1000)  1400 (1000)  1200 (1000)  900 (1000)  700 (1000)
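
Scaling every count by 100 multiplies the statistic by 100, so the same proportions become overwhelmingly significant; a sketch assuming SciPy:

    from scipy.stats import chi2

    observed = [800, 1000, 1400, 1200, 900, 700]
    expected = [1000] * 6
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    print(stat, chi2.sf(stat, df=5))   # statistic is 100 times larger; p-value is essentially 0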

TI-83 – Goodness of Fit Test The TI-83 will not automatically do a goodness-of-fit test. The following procedure will compute χ²:
Enter the observed counts into list L1.
Enter the expected counts into list L2.
Evaluate the expression (L1 – L2)²/L2.
Select LIST > MATH > sum and apply the sum function to the previous result, i.e., sum(Ans).
The result is the value of χ².
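
Outside the calculator, SciPy bundles the statistic and the p-value into one call; a sketch using the die data:

    from scipy.stats import chisquare

    result = chisquare(f_obs=[8, 10, 14, 12, 9, 7], f_exp=[10] * 6)
    print(result.statistic, result.pvalue)   # 3.4 and about 0.639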

Example To test whether the coin is fair, the null hypothesis would be H0: pHH = 1/4, pTT = 1/4, pHT = 1/2. The alternative hypothesis would be H1: H0 is false. Let α = 0.05.

Expected Counts To find the expected counts, we apply the hypothetical probabilities to the sample size. Expected HH = (1/4) × 100 = 25. Expected TT = (1/4) × 100 = 25. Expected HT = (1/2) × 100 = 50.

Example We will use the sample data given for the 100 double coin tosses to calculate the χ² statistic. Make a chart showing both the observed and expected counts (in parentheses).

Outcome:  HH       TT       HT
Count:    16 (25)  36 (25)  48 (50)

Example Now calculate χ²: χ² = (16 − 25)²/25 + (36 − 25)²/25 + (48 − 50)²/50 = 3.24 + 4.84 + 0.08 = 8.16.

Compute the p-value In this example, df = 2. To find the p-value, use the TI-83 to calculate the probability that χ²(2) would be at least as large as 8.16: χ²cdf(8.16, E99, 2) = 0.0169. Therefore, p-value = 0.0169 (reject H0).
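
The coin example as a one-call check (a sketch assuming SciPy):

    from scipy.stats import chisquare

    observed = [16, 36, 48]    # HH, TT, HT counts out of 100 double tosses
    expected = [25, 25, 50]    # from H0: 1/4, 1/4, 1/2
    result = chisquare(f_obs=observed, f_exp=expected)
    print(result.statistic, result.pvalue)   # 8.16 and about 0.017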