
1 Concept Map for Statistics as taught in IS271 (a work in progress), Rashmi Sinha
Analysis of Differences
- Between Two Groups
  - Independent Groups: Independent Samples t-test
  - Dependent Groups: Repeated Measures t-test
- Between Multiple Groups
  - Independent Groups: Independent Samples ANOVA
  - Dependent Groups: Repeated Measures ANOVA
Analysis of Relationships
- One Predictor: Correlation (Pearson), Regression
- Multiple Predictors: Multiple Regression
Type of Data
- Interval Data: the tests above
- Ordinal Data: Correlation (Spearman), Ordinal Regression, some kinds of Regression
- Nominal / Ordinal Data: Chi Square (Frequency)

2 Analysis of Variance, or F test
ANOVA is a technique for using differences between sample means to draw inferences about the presence or absence of differences between population means.
- The logic
- Calculations in SPSS
- Magnitude of effect: eta squared, omega squared

3 Assumptions of ANOVA
Assume:
- Observations are normally distributed within each population
- Population variances are equal (homogeneity of variance, or homoscedasticity)
- Observations are independent

4 Assumptions, cont.
Analysis of variance is generally robust to the first two assumptions.
- A robust test is one that is not greatly affected by violations of its assumptions.

5 Logic of Analysis of Variance
Null hypothesis (H0): population means from the different conditions are equal
- μ1 = μ2 = μ3 = μ4
Alternative hypothesis (H1):
- Not all population means are equal.

6 Let's visualize the total amount of variance in an experiment
Total Variance (Mean Square Total) =
- Between-group differences (Mean Square Group)
- Error variance: individual differences + random variance (Mean Square Error)
The F ratio is MS Group / MS Error:
- The larger the group differences, the bigger the F.
- The larger the error variance, the smaller the F.
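The variance decomposition above can be checked numerically. This is a minimal sketch with hypothetical example scores (not data from the slides), verifying that SS total = SS group + SS error:

```python
# Hypothetical scores for three experimental conditions (not from the slides).
groups = [
    [3, 5, 4, 6],   # condition 1
    [7, 8, 6, 9],   # condition 2
    [2, 3, 4, 3],   # condition 3
]

all_scores = [x for g in groups for x in g]
grand_mean = sum(all_scores) / len(all_scores)

# Total variability: every score around the grand mean
ss_total = sum((x - grand_mean) ** 2 for x in all_scores)

# Between-group variability: each group mean around the grand mean,
# weighted by group size
ss_group = sum(len(g) * ((sum(g) / len(g)) - grand_mean) ** 2 for g in groups)

# Within-group (error) variability: each score around its own group mean
ss_error = sum(
    sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups
)

print(ss_total, ss_group, ss_error)
assert abs(ss_total - (ss_group + ss_error)) < 1e-9  # the partition holds
```

For these numbers the 54 units of total variability split into 42 between groups and 12 within groups, which is why widely separated group means push the F ratio up.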

7 Logic, cont.
- Create a measure of variability among group means: MS group
- Create a measure of variability within groups: MS error

8 Logic, cont.
Form the ratio MS group / MS error:
- The ratio is approximately 1 if the null is true.
- The ratio is significantly larger than 1 if the null is false.
- "Approximately 1" can actually be as high as 2 or 3, but not much higher.
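The claim that F hovers near 1 when the null is true can be illustrated by simulation. This sketch (assumed setup, not from the slides) draws every group from the same normal population, so any between-group differences are pure sampling noise:

```python
import random

random.seed(42)  # fixed seed so the simulation is reproducible

def f_stat(groups):
    """One-way ANOVA F ratio: MS_group / MS_error (equal group sizes)."""
    g = len(groups)
    n = len(groups[0])
    all_x = [x for grp in groups for x in grp]
    grand = sum(all_x) / (g * n)
    ss_group = sum(n * ((sum(grp) / n) - grand) ** 2 for grp in groups)
    ss_error = sum(
        sum((x - sum(grp) / n) ** 2 for x in grp) for grp in groups
    )
    return (ss_group / (g - 1)) / (ss_error / (g * (n - 1)))

# All three groups come from the SAME population: the null is true.
fs = []
for _ in range(2000):
    groups = [[random.gauss(50, 10) for _ in range(10)] for _ in range(3)]
    fs.append(f_stat(groups))

print(f"mean F under the null: {sum(fs) / len(fs):.2f}")  # close to 1
```

Individual F values still scatter (some land above 2 or 3 by chance), which is exactly why a significance test, rather than eyeballing the ratio, is needed.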

9 Grand mean = 3.78

10 Calculations
Start with Sums of Squares (SS). We need:
- SS total
- SS groups
- SS error
Then compute degrees of freedom (df), mean squares, and F.

11 Calculations, cont.

12 Degrees of Freedom (df)
Number of "observations" free to vary:
- df total = N - 1 (N observations)
- df groups = g - 1 (g means)
- df error = g(n - 1) (n - 1 df within each of the g groups)
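The full calculation sequence from the last three slides (SS, then df, then mean squares and F) can be sketched end to end. The data here are hypothetical, standing in for g = 3 groups of n = 4 observations:

```python
# Hypothetical data: g = 3 groups, n = 4 observations each (equal n assumed).
groups = [
    [3, 5, 4, 6],
    [7, 8, 6, 9],
    [2, 3, 4, 3],
]
g = len(groups)
n = len(groups[0])
N = g * n

all_scores = [x for grp in groups for x in grp]
grand_mean = sum(all_scores) / N
group_means = [sum(grp) / n for grp in groups]

# Sums of squares
ss_total = sum((x - grand_mean) ** 2 for x in all_scores)
ss_group = sum(n * (m - grand_mean) ** 2 for m in group_means)
ss_error = ss_total - ss_group          # the remainder is within-group

# Degrees of freedom, exactly as on the slide
df_total = N - 1        # N observations, minus 1 for the grand mean
df_group = g - 1        # g means, minus 1
df_error = g * (n - 1)  # n - 1 df within each of the g groups

# Mean squares and F
ms_group = ss_group / df_group
ms_error = ss_error / df_error
F = ms_group / ms_error

print(f"SS_group={ss_group}, SS_error={ss_error}, "
      f"df=({df_group}, {df_error}), F={F:.2f}")
```

Note that df_total = df_group + df_error (11 = 2 + 9 here), mirroring the partition of the sums of squares.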

13 Summary Table

14 When there are more than two groups
- A significant F shows only that not all groups are equal; we want to know which groups differ.
- Multiple comparison procedures are designed to control the familywise error rate.
  - Familywise error rate defined
  - Contrast with the per-comparison error rate

15 Multiple Comparisons
- The more tests we run, the more likely we are to make a Type I error.
- Good reason to hold down the number of tests.

16 Bonferroni t Test
- Run t tests between pairs of groups, as usual.
  - Hold down the number of t tests.
  - Reject if t exceeds the critical value in the Bonferroni table.
- Works by using a stricter significance level for each comparison.

17 Bonferroni t, cont.
- The critical value of α for each test is set at .05/c, where c = the number of tests run.
  - Assuming familywise α = .05
  - e.g., with 3 tests, each t must be significant at the .05/3 = .0167 level.
- With computer printout, just make sure the calculated probability is < .05/c.
- The necessary table is in the book.
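The printout rule on this slide reduces to a one-line comparison. In this sketch the group names and p-values are hypothetical, standing in for the printed p-values of c pairwise t tests:

```python
# Bonferroni decision rule: with familywise alpha = .05 and c tests,
# each comparison must reach the .05/c level.
alpha_fw = 0.05

# Hypothetical pairwise results (group pair -> p-value from its t test).
pairwise = {
    ("control", "drug A"): 0.012,
    ("control", "drug B"): 0.030,
    ("drug A", "drug B"): 0.240,
}

c = len(pairwise)            # number of tests run
alpha_pc = alpha_fw / c      # per-comparison criterion: .05/3 ≈ .0167

for pair, p in pairwise.items():
    verdict = "reject H0" if p < alpha_pc else "fail to reject H0"
    print(f"{pair}: p = {p:.3f} vs {alpha_pc:.4f} -> {verdict}")
```

Note that p = .030 would pass an uncorrected .05 test but fails the Bonferroni criterion; that stricter bar is the price paid to hold the familywise error rate at .05.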

18 Magnitude of Effect
Why you need to compute magnitude-of-effect indices.
Eta squared (η²):
- Easy to calculate
- Somewhat biased on the high side
- Percent of variation in the data that can be attributed to treatment differences
- Formula: see slide #33

19 Magnitude of Effect, cont.
Omega squared (ω²):
- Much less biased than η²
- Not as intuitive
- We adjust both numerator and denominator with MS error
- Formula on the next slide

20 η² and ω² for Foa et al.
- η² = .18: 18% of the variability in symptoms can be accounted for by treatment.
- ω² = .12: a less biased estimate; note that it is 33% smaller.
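Both indices come straight out of the ANOVA summary quantities. This sketch uses hypothetical data (not the Foa et al. numbers) with the standard formulas: η² = SS_group / SS_total, and ω² = (SS_group − df_group · MS_error) / (SS_total + MS_error):

```python
# Hypothetical data, standing in for a one-way design with g groups of n.
groups = [
    [3, 5, 4, 6],
    [7, 8, 6, 9],
    [2, 3, 4, 3],
]
g, n = len(groups), len(groups[0])
N = g * n

all_x = [x for grp in groups for x in grp]
grand = sum(all_x) / N
ss_total = sum((x - grand) ** 2 for x in all_x)
ss_group = sum(n * ((sum(grp) / n) - grand) ** 2 for grp in groups)
ms_error = (ss_total - ss_group) / (g * (n - 1))

# Eta squared: proportion of total variability attributable to treatment.
eta_sq = ss_group / ss_total

# Omega squared: both numerator and denominator adjusted with MS_error,
# giving a less biased (smaller) estimate.
omega_sq = (ss_group - (g - 1) * ms_error) / (ss_total + ms_error)

print(f"eta^2 = {eta_sq:.3f}, omega^2 = {omega_sq:.3f}")
```

As on the slide, ω² always comes out below η² for the same data; the gap shrinks as the samples get larger and the high-side bias of η² fades.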

