
1
The t Test for Two Independent Samples

2
What Does a t Test for Independent Samples Mean? We will look at the difference between the means of two separate samples. A research design that uses a separate sample for each treatment condition (or for each population) is called an independent-measures research design, or a between-subjects design. This is in contrast to repeated-measures (within-subjects) designs.

3
What Do Our Hypotheses Look Like for These Tests? Null: H0: μ1 = μ2 (no difference between the population means), which is the same as μ1 − μ2 = 0. Alternative: H1: μ1 ≠ μ2 (there is a mean difference), which is the same as μ1 − μ2 ≠ 0.

4
What Is the Formula for Two-Sample t Tests? It is actually very similar to the one-sample test: t = [(M1 − M2) − (μ1 − μ2)] / s(M1 − M2). This says that t equals the observed mean difference minus the expected mean difference, all divided by the standard error. This raises the question: what is the standard error for two samples?
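The formula can be sketched as a small function. This is a minimal illustration: the function name is mine, and the numbers plugged in (means 26 and 18, standard error 2.0) come from the worked example later in the deck.

```python
def t_statistic(m1, m2, expected_diff, se):
    """Two-sample t: (observed mean difference - expected difference) / standard error."""
    return ((m1 - m2) - expected_diff) / se

# Under H0 the expected difference (mu1 - mu2) is 0.
print(t_statistic(26, 18, 0, 2.0))  # 4.0
```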

5
What Is the Standard Error for Two Samples? We know that M1 approximates μ1 with some error, and M2 approximates μ2 with some error. Therefore we have two sources of error, which we combine with the following formula: s(M1 − M2) = √[(s1²/n1) + (s2²/n2)].
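The formula above can be sketched directly (the function name is mine; the example values assume both sample variances are 20 with n = 10 each, matching the deck's later example):

```python
import math

def standard_error_unpooled(var1, n1, var2, n2):
    """Standard error of (M1 - M2) built directly from the two sample variances."""
    return math.sqrt(var1 / n1 + var2 / n2)

print(standard_error_unpooled(20, 10, 20, 10))  # 2.0
```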

6
But There Is a Problem… Does anyone know the problem with this standard error? It only works when n1 = n2. When this isn't the case, we need to use a pooled estimate of variance; otherwise we will have a biased statistic. So what we have to do is pool the variance. What does this mean?

7
What Is the Pooled Variance of Two Samples? To correct for the bias in the sample variances, the independent-measures t statistic combines the two sample variances into a single value called the pooled variance. The formula for pooled variance is: sp² = (SS1 + SS2) / (df1 + df2). This allows us to calculate an estimate of the standard error.
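As a sketch (the function name is mine; the arguments are the SS and df values from the deck's example):

```python
def pooled_variance(ss1, ss2, df1, df2):
    """Pooled variance: total sum of squared deviations over total degrees of freedom."""
    return (ss1 + ss2) / (df1 + df2)

print(pooled_variance(200, 160, 9, 9))  # 20.0
```

Because each SS is weighted by its own df, larger samples contribute more to the pooled estimate, which is what removes the bias when n1 ≠ n2.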

8
What Is Our New Estimate of Standard Error? For this we use the pooled variance in place of the sample variances: s(M1 − M2) = √[(sp²/n1) + (sp²/n2)]. What does the pooled standard error tell us? It is a measure of the standard discrepancy between the sample statistic (M1 − M2) and the corresponding population parameter (μ1 − μ2). Now all we need are the df.
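A sketch of the pooled form (function name is mine; sp² = 20 and n = 10 per group match the deck's example):

```python
import math

def pooled_standard_error(sp2, n1, n2):
    """Estimated standard error of (M1 - M2), using one pooled variance for both samples."""
    return math.sqrt(sp2 / n1 + sp2 / n2)

print(pooled_standard_error(20, 10, 10))  # 2.0
```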

9
How Do We Calculate the df? We need to take both samples into account: df1 = n1 − 1 and df2 = n2 − 1. Finally, the total df is dftot = df1 + df2.

10

11
An Example. Group 1: {19, 20, 24, 30, 31, 32, 30, 27, 22, 25}, with n1 = 10, M1 = 26, SS1 = 200. Group 2: {23, 22, 15, 16, 18, 12, 16, 19, 14, 25}, with n2 = 10, M2 = 18, SS2 = 160.
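These summary numbers can be checked from the raw scores (the helper name `describe` is mine):

```python
group1 = [19, 20, 24, 30, 31, 32, 30, 27, 22, 25]
group2 = [23, 22, 15, 16, 18, 12, 16, 19, 14, 25]

def describe(xs):
    """Return n, the sample mean M, and the sum of squared deviations SS."""
    n = len(xs)
    m = sum(xs) / n
    ss = sum((x - m) ** 2 for x in xs)
    return n, m, ss

print(describe(group1))  # (10, 26.0, 200.0)
print(describe(group2))  # (10, 18.0, 160.0)
```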

12
Step 1: State Your Hypotheses. Null: H0: μ1 = μ2. Alternative: H1: μ1 ≠ μ2. State your alpha: α = .05.

13
Step 2: Find the Critical t. First find the df: dftot = df1 + df2 = 9 + 9 = 18. Then find the two-tailed critical t value for df = 18 and α = .05: the critical value is t = ±2.101.
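Instead of a printed t table, the critical value can be looked up with SciPy's t distribution (assuming SciPy is available; `ppf` is the inverse CDF):

```python
from scipy import stats

alpha = 0.05
df = 18
# Two-tailed test: put alpha/2 = .025 in each tail, so look up the .975 quantile.
t_crit = stats.t.ppf(1 - alpha / 2, df)
print(round(t_crit, 3))  # 2.101
```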

14
Step 3: Sample Data and Test Statistics. From the example: n1 = 10, M1 = 26, SS1 = 200; n2 = 10, M2 = 18, SS2 = 160. sp² = (SS1 + SS2) / (df1 + df2) = (200 + 160) / (9 + 9) = 20. s(M1 − M2) = √[(sp²/n1) + (sp²/n2)] = √(2 + 2) = 2. tobs = [(M1 − M2) − (μ1 − μ2)] / s(M1 − M2) = (26 − 18 − 0) / 2 = 4.
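Putting the three formulas together for this example (variable names are mine):

```python
import math

n1, m1, ss1 = 10, 26, 200
n2, m2, ss2 = 10, 18, 160

sp2 = (ss1 + ss2) / ((n1 - 1) + (n2 - 1))   # pooled variance
se = math.sqrt(sp2 / n1 + sp2 / n2)         # pooled standard error
t_obs = ((m1 - m2) - 0) / se                # expected difference is 0 under H0

print(sp2, se, t_obs)  # 20.0 2.0 4.0
```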

15
Step 4: Make a Decision. Is our observed t greater than or less than the critical value of t? Since 4 > 2.101, we reject the null hypothesis and report: t(18) = 4.00, p < .05.

16
Effect Size. Cohen's d = (M1 − M2) / √sp², the mean difference divided by the pooled standard deviation. r² = t² / (t² + df). r² is a PRE measure: the variability explained by the treatment divided by the total variability.
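Both effect sizes for the worked example (values carried over from the earlier steps):

```python
import math

m1, m2, sp2 = 26, 18, 20
t_obs, df = 4.0, 18

d = (m1 - m2) / math.sqrt(sp2)        # Cohen's d: mean difference over pooled SD
r2 = t_obs ** 2 / (t_obs ** 2 + df)   # proportion of variance explained

print(round(d, 3), round(r2, 3))  # 1.789 0.471
```

By conventional benchmarks a d near 1.79 is a very large effect, consistent with the clearly significant t.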

17

18
Confidence Intervals. A point estimate is a single value (here, the observed difference M1 − M2). An interval estimate gives a range of plausible values around the point estimate.
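For an independent-measures design the interval estimate is the point estimate plus or minus the critical t times the standard error. A sketch using the example's numbers (variable names are mine):

```python
point = 26 - 18          # point estimate of (mu1 - mu2)
t_crit = 2.101           # two-tailed critical t for df = 18, alpha = .05
se = 2.0                 # pooled standard error from the example
lower = point - t_crit * se
upper = point + t_crit * se
print(round(lower, 3), round(upper, 3))  # 3.798 12.202
```

The interval excludes 0, which agrees with rejecting H0 at α = .05.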

19
Assumptions! There are always assumptions underlying statistical tests. We need to know these assumptions so that we don't violate them and get misleading results. So what are the t-test assumptions? 1. The observations within each sample must be independent. 2. The two populations from which the samples are selected must be normal. 3. The two populations from which the samples are selected must have equal variances.
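One common way to check assumption 3 is Levene's test, shown here on the example data (this assumes SciPy is available; it is one possible check, not part of the original procedure):

```python
from scipy import stats

group1 = [19, 20, 24, 30, 31, 32, 30, 27, 22, 25]
group2 = [23, 22, 15, 16, 18, 12, 16, 19, 14, 25]

# Levene's test: a large p-value gives no evidence that the variances differ,
# so the equal-variance assumption looks reasonable for these data.
stat, p = stats.levene(group1, group2)
print(p > 0.05)
```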
