PSY 307 – Statistics for the Behavioral Sciences


PSY 307 – Statistics for the Behavioral Sciences Chapter 13 – Single Sample t-Test Chapter 15 – Dependent Sample t-Test

Midterm 2 Results

Score   Grade   N
45-62   A       7
40-44   B       3
34-39   C       4
29-33   D
0-28    F

The top score on the exam and for the curve was 50 – 2 people had it.

Student’s t-Test William Sealy Gossett published under the name “Student” but was a chemist and executive at Guinness Brewery until 1935.

What is the t Distribution? The t distribution is the shape of the sampling distribution of the mean when the population standard deviation must be estimated from the sample; this matters most when n < 30. The shape changes slightly depending on the number of subjects in the sample. The degrees of freedom (df) tell you which t distribution should be used to test your hypothesis: df = n - 1

Comparison to Normal Distribution Both are symmetrical, unimodal, and bell-shaped. When df are infinite, the t distribution is the normal distribution. When df are greater than 30, the t distribution closely approximates it. When df are less than 30, higher frequencies occur in the tails for t.

The Shape Varies with the df (k) Smaller df produce heavier tails

Comparison of t Distribution and Normal Distribution for df=4

Finding Critical Values of t Use the t-table NOT the z-table. Calculate the degrees of freedom. Select the significance level (e.g., .05, .01). Look in the row for the df and the column for the significance level. If the obtained t exceeds the critical value, then the result is significant (reject the null hypothesis).
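
A minimal sketch of this lookup in Python, using scipy.stats.t.ppf in place of the printed t-table (the df, alpha, and obtained t below are made-up illustration values):

    from scipy import stats

    df = 9          # degrees of freedom, e.g., n - 1 = 10 - 1
    alpha = 0.05    # significance level

    # Two-tailed critical value: put alpha/2 in each tail.
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    print(f"Critical t for df={df}, two-tailed alpha={alpha}: {t_crit:.3f}")

    # Reject H0 if the obtained t exceeds the critical value.
    t_obtained = 2.50
    print("Significant" if abs(t_obtained) > t_crit else "Not significant")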

Link to t-Tables http://www.statsoft.com/textbook/sttable.html

Calculating t The formula for t is the same as that for z except the standard deviation is estimated – not known: t = (sample mean – hypothesized population mean) / estimated standard error of the mean, where the estimated standard error is s / √n. Sample standard deviation (s) is calculated using (n – 1) in the denominator, not n.
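
A small worked sketch in Python (the scores and hypothesized mean are invented for illustration):

    import numpy as np
    from scipy import stats

    scores = np.array([52, 48, 55, 50, 47, 53, 49, 51])  # hypothetical sample
    mu_hyp = 50                                           # hypothesized population mean

    n = len(scores)
    s = scores.std(ddof=1)        # sample SD uses n - 1 in the denominator
    se = s / np.sqrt(n)           # estimated standard error of the mean
    t = (scores.mean() - mu_hyp) / se
    print(f"t = {t:.3f}, df = {n - 1}")

    # Cross-check with the built-in one-sample t-test:
    print(stats.ttest_1samp(scores, mu_hyp))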

Confidence Intervals for t Use the same formula as for z but: Substitute the t value (from the t-table) in place of z. Substitute the estimated standard error of the mean in place of the known standard error of the mean. Mean ± (tconf)(sx), where sx is the estimated standard error of the mean (s / √n). Get tconf from the t-table by selecting the df and confidence level.
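
Continuing the invented sample above, a sketch of the interval:

    import numpy as np
    from scipy import stats

    scores = np.array([52, 48, 55, 50, 47, 53, 49, 51])   # same hypothetical sample
    n = len(scores)
    se = scores.std(ddof=1) / np.sqrt(n)   # estimated standard error of the mean

    t_conf = stats.t.ppf(0.975, df=n - 1)  # 95% confidence, two-tailed
    lower = scores.mean() - t_conf * se
    upper = scores.mean() + t_conf * se
    print(f"95% CI: [{lower:.2f}, {upper:.2f}]")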

Assumptions Use t whenever the population standard deviation is unknown. The t test assumes the underlying population is normal. The t test will still produce valid results with non-normal underlying populations when the sample size is greater than about 10.

Deciding between t and z Use z when the population is normal and σ is known (e.g., given in the problem). Use t when the population is normal but σ is unknown (use s in place of σ). If the population is not normal, consider the sample size. Use either t or z if n > 30 (see above). If n < 30, not enough is known.
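
A hypothetical helper that encodes this decision rule (the function name and return strings are mine, not from the course):

    def choose_test(population_normal: bool, sigma_known: bool, n: int) -> str:
        """Return which statistic the decision rule above suggests."""
        if population_normal:
            return "z-test" if sigma_known else "t-test"
        if n > 30:
            return "t-test or z-test (large n)"
        return "undetermined (small n, non-normal population)"

    print(choose_test(population_normal=True, sigma_known=False, n=12))  # t-test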

What are Degrees of Freedom? Degrees of freedom (df) are the number of values free to vary given some mathematical restriction. Example – if a set of numbers must add up to a specific total, df are the number of values that can vary and still produce that total. In calculating s (std dev), one df is used up calculating the mean.

Example What number must X be to make the total 20?

    5      100
    10     200
    7      300
    X      X
    --     --
    20     20

In each column the first three numbers are free to vary; X is limited by the constraint that the sum of all the numbers must be 20. So there are 3 degrees of freedom in this example.

A More Accurate Estimate of s When calculating s for inferential statistics (but not descriptive statistics), an adjustment is made. One degree of freedom is used up by the mean, which appears in the numerator Σ(X – mean)². One degree of freedom must therefore be subtracted in the denominator to accurately describe variability: s = √[Σ(X – mean)² / (n – 1)].
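
A quick sketch of the two denominators with NumPy (the values are arbitrary):

    import numpy as np

    x = np.array([5, 10, 7, -2])   # arbitrary illustration values
    print(x.std(ddof=0))   # descriptive SD: divides by n
    print(x.std(ddof=1))   # inferential estimate: divides by n - 1 (one df used by the mean)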

Within Subjects Designs Two t-tests, depending on design: t-test for independent groups is for Between Subjects designs. t-test for paired samples is for Within Subjects designs. Dependent samples are also called: Paired samples Repeated measures Matched samples

Examples of Paired Samples Within subject designs Pre-test/post-test Matched-pairs

Independent samples – separate groups

Dependent Samples Each observation in one sample is paired one-to-one with a single observation in the other sample. Difference score (D) – the difference between each pair of scores in the two paired samples. Hypotheses (two-tailed or one-tailed): H0: μD = 0 or μD ≤ 0; H1: μD ≠ 0 or μD > 0

Repeated Measures A special kind of matching where the same subject is measured more than once. This kind of matching reduces variability due to individual differences.

Calculating t for Matched Samples Except that D is used in place of X, the formula for calculating the t statistic is the same. The estimated standard error of the sampling distribution of D (the standard deviation of the difference scores divided by √n) is used in the formula for t.
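
A minimal paired-samples sketch in Python (the pre/post scores are invented for illustration):

    import numpy as np
    from scipy import stats

    pre  = np.array([12, 15, 11, 14, 13, 16])   # hypothetical pre-test scores
    post = np.array([14, 18, 12, 15, 16, 17])   # same subjects, post-test

    d = post - pre                     # difference score for each pair
    n = len(d)
    se_d = d.std(ddof=1) / np.sqrt(n)  # estimated standard error of the mean difference
    t = d.mean() / se_d                # tests H0: mu_D = 0
    print(f"t = {t:.3f}, df = {n - 1}")

    # Cross-check with the built-in paired t-test:
    print(stats.ttest_rel(post, pre))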

Degrees of Freedom Subtracting the two values in each pair gives a single difference score. The differences, not the original values, are used in the t calculation, so degrees of freedom = n - 1, where n is the number of pairs. Because observations are paired, the number of subjects in each group is the same.

Confidence Interval for μD Substitute the mean of D for the mean of X. Use the tconf value that corresponds to the degrees of freedom (n - 1) and the desired α level (e.g., 95% = .05 two-tailed). Use the estimated standard error of the mean difference, sD (the standard deviation of the difference scores divided by √n). Mean D ± (tconf)(sD)
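
Continuing the invented difference scores above, a sketch of this interval:

    import numpy as np
    from scipy import stats

    d = np.array([2, 3, 1, 1, 3, 1])        # difference scores from the example above
    n = len(d)
    se_d = d.std(ddof=1) / np.sqrt(n)       # estimated standard error of the mean difference

    t_conf = stats.t.ppf(0.975, df=n - 1)   # 95% confidence, two-tailed
    print(f"95% CI for mu_D: [{d.mean() - t_conf*se_d:.2f}, {d.mean() + t_conf*se_d:.2f}]")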

When to Match Samples Matching reduces degrees of freedom – the df are for the pair, not for individual subjects. Matching may reduce generality of the conclusion by restricting results to the matching criterion. Matching is appropriate only when an uncontrolled variable has a big impact on results.

Deciding Which t-Test to Use How many samples are there? Just one group -- treat as a population. One sample plus a population is not two samples. If there are two samples, are the observations paired? Do the same subjects appear in both conditions (same people tested twice)? Are pairs of subjects matched (twins)?

Population Correlation Coefficient Two correlated variables are similar to a matched sample because in both cases, observations are paired. The population correlation coefficient (ρ) would represent the mean of the r’s for all possible pairs of samples. Hypotheses: H0: ρ = 0 H1: ρ ≠ 0

t-Test for Rho (ρ) Similar to a t-test for a single group. Tests whether the value of r is significantly different from what might occur by chance. Do the two variables vary together by accident or due to an underlying relationship?

Formula for t t = r / √[(1 – r²) / (n – 2)] The denominator is the estimated standard error of r, based on the standard error of prediction.

Calculating t for Correlated Variables Except that r is used in place of X, the formula for calculating the t statistic is the same. The standard error of prediction is used in the denominator in place of the standard deviation. Compare against the critical value for t with df = n – 2 (n = number of pairs).
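
A sketch of this calculation in Python (r and n are arbitrary example values):

    import numpy as np
    from scipy import stats

    r, n = 0.45, 30                       # example correlation and number of pairs
    df = n - 2
    t = r / np.sqrt((1 - r**2) / df)      # t-test for a correlation coefficient
    t_crit = stats.t.ppf(0.975, df)       # two-tailed, alpha = .05
    print(f"t = {t:.3f}, critical t = {t_crit:.3f}, significant: {abs(t) > t_crit}")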

Importance of Sample Size Lower values of r become significant with greater sample sizes: As n increases, the critical value of t decreases, so it is easier to obtain a significant result. Cohen’s rule of thumb .10 = weak relationship .30 = moderate relationship .50 = strong relationship
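
A short sketch of how the smallest significant |r| shrinks as n grows (alpha = .05 two-tailed is an assumption; it solves the formula above for r at the critical t):

    import numpy as np
    from scipy import stats

    for n in (10, 30, 100, 1000):
        df = n - 2
        t_crit = stats.t.ppf(0.975, df)              # two-tailed, alpha = .05
        r_min = t_crit / np.sqrt(df + t_crit**2)     # smallest |r| reaching significance
        print(f"n = {n:4d}: smallest significant |r| = {r_min:.3f}")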