Chapter 9: Introduction to the t statistic


The t Statistic The t statistic allows researchers to use sample data to test hypotheses about an unknown population mean. The particular advantage of the t statistic is that it does not require any knowledge of the population standard deviation. Thus, the t statistic can be used to test hypotheses about a completely unknown population; that is, both μ and σ are unknown, and the only available information about the population comes from the sample.

The t Statistic (cont.) All that is required for a hypothesis test with t is a sample and a reasonable hypothesis about the population mean. There are two general situations where this type of hypothesis test is used:

The t Statistic (cont.) 1. The t statistic is used when a researcher wants to determine whether or not a treatment causes a change in a population mean. In this case you must know the value of μ for the original, untreated population. A sample is obtained from the population and the treatment is administered to the sample. If the resulting sample mean is significantly different from the original population mean, you can conclude that the treatment has a significant effect.

The t Statistic (cont.) 2. Occasionally a theory or other prediction will provide a hypothesized value for an unknown population mean. A sample is then obtained from the population and the t statistic is used to compare the actual sample mean with the hypothesized population mean. A significant difference indicates that the hypothesized value for μ should be rejected.

The Estimated Standard Error and the t Statistic Whenever a sample is obtained from a population you expect to find some discrepancy or "error" between the sample mean and the population mean. This general phenomenon is known as sampling error. The goal for a hypothesis test is to evaluate the significance of the observed discrepancy between a sample mean and the population mean.

The Estimated Standard Error and the t Statistic (cont.) The hypothesis test attempts to decide between the following two alternatives: 1. Is it reasonable that the discrepancy between M and μ is simply due to sampling error and not the result of a treatment effect? 2. Is the discrepancy between M and μ more than would be expected by sampling error alone? That is, is the sample mean significantly different from the population mean?

The Estimated Standard Error and the t Statistic (cont.) The critical first step for the t statistic hypothesis test is to calculate exactly how much difference between M and μ is reasonable to expect. However, because the population standard deviation is unknown, it is impossible to compute the standard error of M as we did with z-scores in Chapter 8. Therefore, the t statistic requires that you use the sample data to compute an estimated standard error of M.

The Estimated Standard Error and the t Statistic (cont.) This calculation defines standard error exactly as it was defined in Chapters 7 and 8, but now we must use the sample variance, s², in place of the unknown population variance, σ² (or use sample standard deviation, s, in place of the unknown population standard deviation, σ). The resulting formula for estimated standard error is $s_M = \sqrt{s^2/n}$, or equivalently $s_M = s/\sqrt{n}$.
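
As a minimal sketch (the scores below are hypothetical, invented purely for illustration), the estimated standard error can be computed directly from raw sample data:

```python
import math

scores = [12, 9, 14, 10, 11, 13, 8, 15]  # hypothetical sample scores
n = len(scores)

M = sum(scores) / n                       # sample mean
SS = sum((x - M) ** 2 for x in scores)    # sum of squared deviations
s2 = SS / (n - 1)                         # sample variance: s^2 = SS / (n - 1)

s_M = math.sqrt(s2 / n)                   # estimated standard error of M
print(f"M = {M:.2f}, s^2 = {s2:.2f}, s_M = {s_M:.3f}")
```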

The Estimated Standard Error and the t Statistic (cont.) The t statistic (like the z-score) forms a ratio. The top of the ratio contains the obtained difference between the sample mean and the hypothesized population mean. The bottom of the ratio is the standard error which measures how much difference is expected by chance: $t = \frac{\text{obtained difference}}{\text{standard error}} = \frac{M - \mu}{s_M}$.
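
Continuing the sketch, the hand-computed ratio can be cross-checked against a library routine. This assumes SciPy is available and reuses the same hypothetical scores, with a hypothesized μ = 10 that is also an assumption for illustration:

```python
from scipy import stats

scores = [12, 9, 14, 10, 11, 13, 8, 15]  # hypothetical sample scores
mu = 10                                  # hypothesized population mean (assumed)

# ttest_1samp computes t = (M - mu) / s_M, estimating s_M from the sample
result = stats.ttest_1samp(scores, popmean=mu)
print(f"t = {result.statistic:.3f}, two-tailed p = {result.pvalue:.4f}")
```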

The Estimated Standard Error and the t Statistic (cont.) A large value for t (a large ratio) indicates that the obtained difference between the data and the hypothesis is greater than would be expected if the treatment has no effect.

The t Distributions and Degrees of Freedom You can think of the t statistic as an "estimated z-score." The estimation comes from the fact that we are using the sample variance to estimate the unknown population variance. With a large sample, the estimation is very good and the t statistic will be very similar to a z-score. With small samples, however, the t statistic will provide a relatively poor estimate of z.

Figure 9.1 Distributions of the t statistic for different values of degrees of freedom are compared to a normal z-score distribution. Like the normal distribution, t distributions are bell-shaped and symmetrical and have a mean of zero. However, t distributions have more variability, indicated by the flatter and more spread-out shape. The larger the value of df is, the more closely the t distribution approximates a normal distribution.

The t Distributions and Degrees of Freedom (cont.) The value of degrees of freedom, df = n - 1, is used to describe how well the t statistic represents a z-score. Also, the value of df will determine how well the distribution of t approximates a normal distribution. For large values of df, the t distribution will be nearly normal, but with small values for df, the t distribution will be flatter and more spread out than a normal distribution.
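
The effect of df can be made concrete by comparing two-tailed α = .05 critical values of t with the corresponding z value; this sketch assumes SciPy is available:

```python
from scipy import stats

# Two-tailed critical values at alpha = .05: t approaches z as df grows
for df in (3, 10, 30, 120):
    print(f"df = {df:>3}: t critical = {stats.t.ppf(0.975, df):.3f}")
print(f"normal (z) critical = {stats.norm.ppf(0.975):.3f}")
```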

The t Distributions and Degrees of Freedom (cont.) To evaluate the t statistic from a hypothesis test, you must select an α level, find the value of df for the t statistic, and consult the t distribution table. If the obtained t statistic is larger (in absolute value) than the critical value from the table, you can reject the null hypothesis. In this case, you have demonstrated that the obtained difference between the data and the hypothesis (numerator of the ratio) is significantly larger than the difference that would be expected if there were no treatment effect (the standard error in the denominator).

Hypothesis Tests with the t Statistic The hypothesis test with a t statistic follows the same four-step procedure that was used with z-score tests: 1. State the hypotheses and select a value for α. (Note: The null hypothesis always states a specific value for μ.) 2. Locate the critical region. (Note: You must find the value for df and use the t distribution table.) 3. Calculate the test statistic. 4. Make a decision (either "reject" or "fail to reject" the null hypothesis).
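
The four steps can be traced end to end in code. This is a minimal sketch using the same hypothetical scores as above, with μ = 10 and α = .05 assumed purely for illustration (not an example from the textbook):

```python
import math
from scipy import stats

# Step 1: state the hypotheses and select alpha
scores = [12, 9, 14, 10, 11, 13, 8, 15]  # hypothetical sample scores
mu = 10        # H0: mu = 10 versus H1: mu != 10 (assumed for illustration)
alpha = 0.05

# Step 2: locate the critical region using df = n - 1
n = len(scores)
df = n - 1
t_crit = stats.t.ppf(1 - alpha / 2, df)  # two-tailed critical value

# Step 3: calculate the test statistic
M = sum(scores) / n
s2 = sum((x - M) ** 2 for x in scores) / (n - 1)
s_M = math.sqrt(s2 / n)
t = (M - mu) / s_M

# Step 4: make a decision
decision = "reject H0" if abs(t) > t_crit else "fail to reject H0"
print(f"t = {t:.3f}, critical value = ±{t_crit:.3f} (df = {df}): {decision}")
```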

Figure 9.3 The basic experimental situation for using the t statistic or the z-score is presented. It is assumed that the parameter μ is known for the population before treatment. The purpose of the experiment is to determine whether the treatment has an effect. Note that the population after treatment has unknown values for the mean and the variance. We will use a sample to test a hypothesis about the population mean.

Measuring Effect Size with the t Statistic Because the significance of a treatment effect is determined partially by the size of the effect and partially by the size of the sample, you cannot assume that a significant effect is also a large effect. Therefore, it is recommended that a measure of effect size be computed along with the hypothesis test.

Measuring Effect Size with the t Statistic (cont.) For the t test it is possible to compute an estimate of Cohen's d just as we did for the z-score test in Chapter 8. The only change is that we now use the sample standard deviation instead of the population value (which is unknown): estimated Cohen's $d = \frac{\text{mean difference}}{\text{standard deviation}} = \frac{M - \mu}{s}$.
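
With the same hypothetical numbers used in the earlier sketches, estimated Cohen's d is a direct computation:

```python
import math

scores = [12, 9, 14, 10, 11, 13, 8, 15]  # hypothetical sample scores
mu = 10                                  # hypothesized population mean (assumed)

n = len(scores)
M = sum(scores) / n
s = math.sqrt(sum((x - M) ** 2 for x in scores) / (n - 1))  # sample standard deviation

d = (M - mu) / s   # treatment effect measured in standard deviation units
print(f"estimated Cohen's d = {d:.3f}")
```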

Figure 9.6 Deviations from μ = 10 (no treatment effect) for the scores in Example 9.1. The colored lines in part (a) show the deviations for the original scores, including the treatment effect. In part (b) the colored lines show the deviations for the adjusted scores after the treatment effect has been removed.

Measuring Effect Size with the t Statistic (cont.) As before, Cohen's d measures the size of the treatment effect in terms of the standard deviation. With a t test it is also possible to measure effect size by computing the percentage of variance accounted for by the treatment. This measure is based on the idea that the treatment causes the scores to change, which contributes to the observed variability in the data.

Measuring Effect Size with the t Statistic (cont.) By measuring the amount of variability that can be attributed to the treatment, we obtain a measure of the size of the treatment effect. For the t statistic hypothesis test, the percentage of variance accounted for is $r^2 = \frac{t^2}{t^2 + df}$.
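
Given a computed t and its df, r² is a one-line calculation; this continues the running sketch, where the hypothetical data gave t ≈ 1.732 with df = 7:

```python
t = 1.732  # t statistic from the running hypothetical example
df = 7     # degrees of freedom, n - 1

r2 = t**2 / (t**2 + df)   # proportion of variance accounted for by the treatment
print(f"r^2 = {r2:.3f}")  # about 0.30, i.e., roughly 30% of the variance
```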
