Lecture 7 PY 427 Statistics 1 Fall 2006 Kin Ching Kong, Ph.D


1 Lecture 7
PY 427 Statistics 1, Fall 2006
Kin Ching Kong, Ph.D.
Chicago School of Professional Psychology

2 Agenda
The t Statistic
- The problem with the z-score as a test statistic
- Estimated standard error (s_M)
- The t formula
- Degrees of freedom
- The shape of the t distribution
- The t-distribution table
Hypothesis Testing with t
Measuring Effect Size for the t Statistic

3 Intro. to the t Statistic
z = (M – μ) / σ_M = (obtained difference between data & hypothesis) / (standard distance expected by chance)
The problem with using the z-score for hypothesis testing: usually we don't know the population standard deviation σ, and therefore we don't know σ_M.
Solution: estimate the population variance with the sample variance, since the sample variance is an unbiased estimate of the population variance.

4 The Estimated Standard Error (s_M)
Sample variance & standard deviation: s² = SS / (n – 1) = SS / df, and s = √(SS / df)
Standard error (σ known): σ_M = σ / √n, or σ_M = √(σ² / n)
Estimated standard error (σ unknown): s_M = s / √n, or s_M = √(s² / n)
The estimated standard error is used when σ is unknown; it provides an estimate of the standard distance between M and μ.
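The estimated standard error above can be checked numerically. A minimal sketch in Python (the function and variable names are mine, not from the lecture):

```python
import math

def estimated_standard_error(SS, n):
    """Estimate the standard error of M when sigma is unknown.

    s^2 = SS / df with df = n - 1, then s_M = sqrt(s^2 / n).
    """
    df = n - 1
    s_squared = SS / df          # sample variance: unbiased estimate of sigma^2
    return math.sqrt(s_squared / n)

# With SS = 72 and n = 9 (Example 9.1's numbers): s^2 = 9, so s_M = 1.
print(estimated_standard_error(72, 9))  # -> 1.0
```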

5 The t Statistic
The t statistic: t = (M – μ) / s_M
The t statistic is used to test hypotheses about an unknown population mean μ when σ is unknown.
t = (sample mean – population mean) / (estimated standard error) = (obtained difference) / (difference expected by chance)
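Putting the formula together in one small Python function (my own sketch, not from the lecture):

```python
import math

def t_statistic(M, mu, SS, n):
    """One-sample t: (M - mu) / s_M, with s_M = sqrt((SS / (n - 1)) / n)."""
    s_M = math.sqrt((SS / (n - 1)) / n)
    return (M - mu) / s_M

# Example 9.1's numbers: M = 36, mu = 30, SS = 72, n = 9
print(t_statistic(36, 30, 72, 9))  # -> 6.0
```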

6 Degrees of Freedom & t
How well t approximates z depends on the df.
Degrees of freedom (df): the number of scores in a sample that are free to vary; df = n – 1.
The greater the df, the better s² represents σ², and the better the t statistic approximates the z-score.

7 The t Distribution
The exact shape of a t distribution changes with df: the larger the df, the more closely the t distribution approximates a normal distribution.
Unlike the z distribution, which is the same for any sample size and always has a mean of 0 and a standard deviation of 1, the t distributions are a family of distributions, each with a different standard deviation. Distributions of t are bell-shaped and symmetrical with a mean of 0, but have more variability than the normal z distribution. (Figure 9.1 of your book)
The t distribution is flatter and more variable than the z distribution because the standard error used in the z formula is a constant, while the estimated standard error used in the t formula is a variable. So samples with the same M will have the same z-score, but not necessarily the same t.
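You can see the t distribution converge toward the normal distribution as df grows. A quick illustration, assuming SciPy is available (SciPy is not mentioned in the lecture; it simply replaces a table lookup):

```python
from scipy import stats

# Two-tailed .05 cutoffs: t needs a larger cutoff than z at small df
# because the t distribution has fatter tails; the gap shrinks as df grows.
for df in (5, 30, 1000):
    print(df, round(stats.t.ppf(0.975, df), 3))

print("z", round(stats.norm.ppf(0.975), 3))  # -> z 1.96
```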

8 The t Distribution Table
The t distribution table (Table B.2; Table 9.1 of your book):
The top two rows show the proportions (or probabilities) in one or two tails.
The first column lists the df.
The numbers in the body of the table are the values of t that separate the tail(s) from the body of the distribution.
Examples:
1. df = 3: find the t value that separates the top 5% of the distribution. (Figure 9.2)
2. n = 6: find the t values that separate the extreme 5%.
Ans to 2: df = 5, t = ±2.571
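The two table-lookup examples above can be reproduced programmatically. A sketch assuming SciPy is available (the lecture uses the printed Table B.2, not SciPy):

```python
from scipy import stats

# Example 1, one-tailed: the t that cuts off the top 5% with df = 3
print(round(stats.t.ppf(0.95, 3), 3))   # -> 2.353

# Example 2, two-tailed: the t bounding the extreme 5% with n = 6, df = 5
print(round(stats.t.ppf(0.975, 5), 3))  # -> 2.571
```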

9 Hypothesis Testing with the t Statistic
Example 9.1 of your book: n = 9 insectivorous birds are tested in a box that has two separate chambers, one with two large eye-spots painted on it. The birds are placed in the box for 60 minutes and are free to go from one chamber to the other. The time spent in the plain chamber is recorded. (Figure 9.4)
M = 36 minutes and SS = 72 were obtained. Did the eye-spots have an effect on behavior? Use α = .05.

10 Hypothesis Testing with the t Statistic (Cont.)
Step 1: State the hypotheses.
H0: μ_plain side = 30 minutes (no preference for either side)
H1: μ_plain side ≠ 30 minutes (a preference for one side)
Step 2: Define the critical region.
For α = .05, df = n – 1 = 9 – 1 = 8, t_critical = ±2.306 (Figure 9.5 of your book)
Step 3: Compute the test statistic.
s² = SS/df = 72/8 = 9
s_M = √(s²/n) = √(9/9) = 1
t = (M – μ) / s_M = (36 – 30)/1 = 6.00
Step 4: Make a decision.
Since 6.00 > 2.306, that is, the sample mean is in the critical region, we reject H0 and conclude that the eye-spot pattern appears to have an effect on the birds' behavior.
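The four steps above can be traced in a few lines of Python (a sketch of this example only; the hard-coded critical value comes from the table, as in the lecture):

```python
import math

# Example 9.1: n = 9 birds, M = 36, SS = 72, H0: mu = 30, two-tailed alpha = .05
n, M, mu, SS = 9, 36.0, 30.0, 72.0

s_squared = SS / (n - 1)          # 72 / 8 = 9
s_M = math.sqrt(s_squared / n)    # sqrt(9 / 9) = 1
t = (M - mu) / s_M                # (36 - 30) / 1 = 6.00

t_critical = 2.306                # Table B.2: df = 8, two-tailed .05
if abs(t) > t_critical:
    print(f"t = {t:.2f} > {t_critical}: reject H0")
else:
    print(f"t = {t:.2f}: fail to reject H0")
```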

11 Your Turn, Exercise 1
A sample, n = 25, is randomly selected from a population with μ = 50, and a treatment is administered to the individuals in the sample. After treatment, the sample is found to have M = 54 with a standard deviation of s = 6. Using a two-tailed test, is the result significant at the .05 level?

12 Effect Size with the t Statistic (Cohen's d)
Significance vs. effect size: the t test tells you whether there is an effect, i.e., a difference that is significantly greater than chance (the standard error), but it doesn't tell you how big the effect or difference is.
Cohen's d measures effect size in units of standard deviation:
Cohen's d = (mean difference) / (standard deviation)
For t tests: estimated Cohen's d = (mean difference) / (sample standard deviation)
e.g., in the previous example, you found that the treatment had a significant effect, but you don't know how much. With M = 54, μ = 50, and s = 6:
Cohen's d = (54 – 50)/6 = 0.67
so the effect size is 0.67 standard deviations.
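The Cohen's d calculation above is a one-liner; a sketch with my own function name:

```python
def cohens_d(M, mu, s):
    """Estimated Cohen's d for a one-sample test: mean difference in units of s."""
    return (M - mu) / s

# Exercise 1's numbers: M = 54, mu = 50, s = 6
print(round(cohens_d(54, 50, 6), 2))  # -> 0.67
```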

13 Effect Size with the t Statistic (r2)
Another way to measure effect size is to measure the amount of variability in the scores that is due to, (explained by, accounted for by) the treatment. Logic: the treatment caused the scores to decrease or increase, thus changing the variance. We can measure the treatment effect by figuring out how much of the variability in the scores is accounted for by the treatment.

14 Effect Size with the t Statistic (r²) Demo.
Example 9.1: the null hypothesis is that the treatment (eye-spots) has no effect on the birds' behavior. H0: μ = 30; M = 36.
Figure 9.6: frequency distribution for the sample. (The scores differ from each other and from the μ of 30. Part of the difference is due to the treatment, and part is due to individual differences, i.e., error.)
SS_total = SS_treatment + SS_error
SS_error = SS with the treatment effect removed (Figure 9.7, Table 9.2)
SS_treatment = SS_total – SS_error = 396 – 72 = 324
% variability accounted for by treatment = (variability accounted for by treatment) / (total variability) = SS_treatment / SS_total = 324/396 = .8182 (81.82%)
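The SS partition above reduces to simple arithmetic; a sketch with Example 9.1's numbers:

```python
# Example 9.1: SS_total = 396, SS_error = 72 (SS with the treatment effect removed)
SS_total, SS_error = 396.0, 72.0

SS_treatment = SS_total - SS_error       # 324: variability due to the treatment
r_squared = SS_treatment / SS_total      # proportion of variability explained

print(round(r_squared, 4))  # -> 0.8182, i.e. 81.82%
```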

15 Effect Size with the t Statistic (r² continued)
r²: the percentage of variance explained by the treatment effect.
r² = t² / (t² + df)
Interpreting r²:
Small effect: 0.01 < r² < 0.09
Medium effect: 0.09 < r² < 0.25
Large effect: r² > 0.25
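The r² formula gives the same answer as the SS partition from Example 9.1, which is worth verifying. A sketch (function name is mine):

```python
def r_squared(t, df):
    """Proportion of variance accounted for by the treatment effect."""
    return t**2 / (t**2 + df)

# Example 9.1: t = 6.00, df = 8, so r^2 = 36 / 44, matching SS_treatment / SS_total
print(round(r_squared(6.0, 8), 4))  # -> 0.8182
```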

16 Your turn: Exercise 2
A population has μ = 90. A sample is selected and a treatment is administered. After treatment, M = 92 and the sample variance is s² = 25. With a two-tailed test and the alpha level set to .05:
a) If n = 25, is the 2-point effect significant? What is the effect size as measured by Cohen's d and r²?
b) If n = 100, is the 2-point effect significant? What is the effect size as measured by Cohen's d and r²?
c) Is the t test affected by n? What about Cohen's d and r²?

