
1 MANOVA

2 MANOVA
Multivariate (multiple) analysis of variance (MANOVA) blends univariate analysis of variance principles with canonical correlation analysis, and it is best understood against the backdrop of basic univariate ANOVA. MANOVA and discriminant function analysis (DA) are strongly related but conceptually distinct: the math is virtually identical, but the direction of prediction (or understanding) is switched.

3 Categorical IV, Continuous DV
                       Single DV          Multiple DVs
Single IV, 2 groups    t test             Hotelling's T2
Single IV, > 2 groups  One-way ANOVA      MANOVA
Multiple IVs           Factorial ANOVA    MANOVA

4 MANOVA vs. Discriminant Analysis
MANOVA
- Similar to ANOVA but deals with multiple dependent variables at the same time
- Can deal with multiple factors (e.g., A, B, A x B designs)
- A hypothesis testing procedure
Discriminant Analysis
- Uses multiple variables to identify group membership in categories
- Used for a single categorical grouping variable
- Identifies dimensionality among groups

5 Assumptions of MANOVA
- DVs are multivariate normal; robust against modest violation (lack of normality can also show up as a failing Box's M test)
- Population covariance matrices are equal (homogeneity), tested by Box's M; robust to modest violation if groups are of equal size
- Linear relationships among the DVs; multicollinearity between DVs should not be too high
- Observations are independent (no correlated error)
- Sensitive to outliers

6 Violation of Assumptions
If you violate assumptions of homogeneity of covariance matrices you can:
- Discard outliers
- Discard groups
- Combine groups
- Drop a DV or combination of DVs
- Transform DVs

7 Why use MANOVA? Multiple DVs -- how to analyze this?
Problems with multiple ANOVAs:
- Inflated Type I error rate (e.g., 5 DVs at α = .05 gives a familywise Type I error rate of about 23%; see the calculation below)
- Doesn't take into account the intercorrelation among DVs
MANOVA is a simultaneous test of an ANOVA with multiple DVs:
- Reduces Type I error rates
- Takes into account the intercorrelation among DVs (optimal linear combinations of DVs)
- Nonsignificant results for many DVs may become significant when the DVs are combined
- Multivariate DVs may be conceptually meaningful
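As a quick check on the 23% figure, the familywise Type I error rate for k independent tests, each at α = .05, is

$$\alpha_{FW} = 1 - (1 - \alpha)^k = 1 - (0.95)^5 \approx 0.226$$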

8 ANOVA Review
ANOVA H0: μ1 = μ2 = ... = μk
Tested by F = MSbtwn / MSwithin (each MS is the corresponding SS divided by its df)
SStotal = SSbtwn + SSwithin
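For reference, the decomposition the slide refers to, written out for k groups with group sizes $n_j$, group means $\bar{y}_j$, grand mean $\bar{y}$, and total N:

$$SS_{total} = \sum_{j=1}^{k}\sum_{i=1}^{n_j}(y_{ij}-\bar{y})^2,\qquad SS_{btwn} = \sum_{j=1}^{k} n_j(\bar{y}_j-\bar{y})^2,\qquad SS_{within} = \sum_{j=1}^{k}\sum_{i=1}^{n_j}(y_{ij}-\bar{y}_j)^2$$

$$F = \frac{SS_{btwn}/(k-1)}{SS_{within}/(N-k)}$$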

9 MANOVA
Tested by maximizing B relative to Werror: the multivariate test statistics are based on the eigenvalues of W⁻¹B
T = B + W (total SSCP = between-groups SSCP + within-groups SSCP)
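In matrix form, B and W are the between-groups and within-groups (error) sums-of-squares-and-cross-products (SSCP) matrices, and the multivariate tests are built from the eigenvalues of $W^{-1}B$:

$$T = B + W,\qquad \lambda_1 \ge \lambda_2 \ge \dots \text{ are the eigenvalues of } W^{-1}B$$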

10 MANOVA
- Creates linear combinations of DVs that optimally discriminate among groups
- Goal: maximize discrimination among the groups
- Each linear combination is orthogonal to the others
- The number of linear combinations extracted for each hypothesis test equals the df for that hypothesis or the number of DVs, whichever is smaller, so different hypothesis tests can yield different numbers of linear combinations (a worked instance follows below)
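As a concrete instance of that rule, applied to the trend effect in the example later in this deck: 3 trend groups give 2 hypothesis df, and there are 4 DVs, so min(2, 4) = 2 linear combinations are extracted, which is why only two discriminant functions (DF1 and DF2) appear in the plot on slide 22.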

11 Overall MV Significance Tests
Each MV test provides an approximate F test for a particular effect on all of the DVs taken together; the tests are built from different combinations of the same matrices, but often yield the same result.
Wilks' Lambda (Λ)
- Depends on the product of the λi (differences across the various dimensions)
Pillai's Trace (V)
- Depends on the sum of the λi (differences across the various dimensions)
- Most robust against violations of MV normality and homogeneity of covariance matrices
- More robust when sample size is low or cell sizes are unequal

12 Overall MV Significance Tests
Hotelling-Lawley Trace (T)
- Depends on the sum of the λi (differences across the various dimensions)
Roy's Greatest Root (θ)
- Focuses only on the first discriminant function (largest λ)
- Works best when there is only one underlying component or factor
- When that condition is met, it is the most powerful statistic
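Written in terms of the eigenvalues $\lambda_i$ of $W^{-1}B$, the four statistics from slides 11 and 12 are:

$$\Lambda = \prod_i \frac{1}{1+\lambda_i},\qquad V = \sum_i \frac{\lambda_i}{1+\lambda_i},\qquad T = \sum_i \lambda_i,\qquad \theta = \frac{\lambda_1}{1+\lambda_1}$$

(Some packages report Roy's statistic as $\lambda_1$ itself rather than $\theta$.)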

13 MANOVA Interpretation
A significant overall MV effect suggests that the groups differ on one or more linear combinations of the DVs.
Follow-up tests:
- Univariate ANOVAs performed only if the MANOVA is significant (protected univariate F tests); these ignore the intercorrelations
- Completely partialled F tests (on the residuals of the DVs)
- Bonferroni-adjusted univariate ANOVAs performed to test specific hypotheses, regardless of whether the overall MANOVA is significant (per-test alpha shown below)
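For the Bonferroni-adjusted follow-ups, the per-test alpha is the familywise alpha divided by the number of tests; with four DVs, for example,

$$\alpha' = \frac{.05}{4} = .0125$$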

14 MANOVA Interpretation
Can use discriminant weights to interpret results
- Like b weights in regression
- Susceptible to the same problems as b weights (intercorrelation, cross-validation)
Can use discriminant loadings to interpret results
- AKA structure coefficients or canonical variate correlations
Reporting MANOVA:
- Describe the MV test statistic used
- Report the approximate F test and df
- Report effect size

15 MANOVA
Assume you have high-performing employees who exhibit different trends of performance (improving, maintaining, declining) that are due to different causes (ability, effort, ease of job).
You have four DVs:
- Pay (change in pay)
- Promotion (likelihood to promote)
- Expect (expected future performance)
- Affect (your feelings toward the employee)
The design is a 3 (trend) by 3 (cause) ANOVA with four DVs.

16 MANOVA
Design grid: CAUSE (Ability, Effort, Ease of Job) crossed with TREND (Improving, Maintaining, Declining).
Four DVs: (1) Pay (change in pay); (2) Promotion (likelihood to promote); (3) Expect (expected future performance); (4) Affect (your feelings toward the employee)

17 MANOVA SPSS Commands
MANOVA pay promote expect affect BY inform(1 3) trend(1 3)
 /DISCRIM RAW STAN ESTIM CORR ROTATE(VARIMAX) ALPHA(1)
 /PRINT SIGNIF(MULT UNIV EIGN DIMENR) SIGNIF(EFSIZE)
   HOMOGENEITY(BARTLETT COCHRAN BOXM)
 /NOPRINT PARAM(ESTIM)
 /POWER T(.05) F(.05)
 /OMEANS TABLES(inform trend)
 /PMEANS TABLES(inform trend)
 /METHOD=UNIQUE
 /ERROR WITHIN+RESIDUAL
 /DESIGN .
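For readers without SPSS, a rough equivalent of this two-factor MANOVA can be sketched in Python with statsmodels. This is an illustrative sketch, not the original analysis: the data file name and the assumption that the data sit in a data frame with columns pay, promote, expect, affect, inform, and trend are mine.

# Illustrative sketch, not the original SPSS run.
# Assumes a CSV with numeric DVs (pay, promote, expect, affect)
# and factor codes inform (cause, 1-3) and trend (1-3).
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("performance.csv")  # hypothetical file name
m = MANOVA.from_formula(
    "pay + promote + expect + affect ~ C(inform) * C(trend)", data=df)
print(m.mv_test())  # Wilks, Pillai, Hotelling-Lawley, and Roy for each effect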

18 Results - Trend
EFFECT .. TREND
[SPSS output: Multivariate Tests of Significance (S = 2, M = 1/2, N = 372 1/2) reporting the value, approximate F, hypothesis and error df, and significance for Pillai's, Hotelling's, Wilks', and Roy's statistics, followed by multivariate effect size and observed power for Pillai's, Hotelling's, and Wilks'; the numeric values were not preserved.]

19 Results - Trend
[SPSS output: Univariate F-tests with (2, 750) df for PAY, PROMOTE, EXPECT, and AFFECT, reporting hypothesis and error SS and MS, F, significance, eta squared, noncentrality, and power; the numeric values were not preserved.]
Meaning: whether performance is improving, being maintained, or declining influences the expectations for future performance.

20 Results - Trend
[SPSS output: VARIMAX-rotated correlations between the canonical variates and the DVs (PAY, PROMOTE, EXPECT, AFFECT); the numeric values were not preserved.]
These are the loadings. What are canonical variates 1 and 2 made up of?

21 Results

22 [Plot of group centroids in the space of the two discriminant functions: DF1 on the horizontal axis and DF2 on the vertical axis (both roughly spanning -.20 to .20), with the Improving, Maintaining, and Declining groups marked.]
DF1 = Expect, Promote
DF2 = Affect (Pay disappears because of its intercorrelations with the other DVs)

23 Interaction Results
EFFECT .. CAUSE BY TREND
[SPSS output: Multivariate Tests of Significance (S = 4, M = -1/2, N = 372 1/2) for Pillai's, Hotelling's, Wilks', and Roy's statistics, with approximate F, hypothesis and error df, and significance, followed by multivariate effect size and observed power; the numeric values were not preserved.]

24 Interaction Results
EFFECT .. INFORM BY TREND (Cont.)
[SPSS output: Univariate F-tests with (4, 750) df for PAY, PROMOTE, EXPECT, and AFFECT, reporting hypothesis and error SS and MS, F, significance, eta squared, noncentrality, and power; the numeric values were not preserved.]

25 Interaction Results
[SPSS output: VARIMAX-rotated correlations between the canonical variates and the dependent variables (PAY, PROMOTE, EXPECT, AFFECT); the numeric values were not preserved.]

26 Results

27 Results

28 MANOVA Problems
- No guarantee that the linear combinations of DVs will make sense (rotation of the discriminant functions can help)
- Significance tests on each of the DVs can yield results that conflict with the overall MV significance test
- Capitalization on chance; cross-validation is crucial
- Intercorrelation creates problems with discriminant weights and their interpretation
- Washing-out effect (including many nonsignificant DVs along with only a few significant DVs)
- Low power; power generally declines as the number of DVs increases

29 Example
One hundred students, preparing to take the Graduate Record Exam, were randomly assigned to one of four training conditions:
- Group 1: No special training
- Group 2: Standard “book and paper” training
- Group 3: Computer-based training
- Group 4: Standard and computer-based training

30 Example Cont’d…
At the end of the study, all students completed a paper-and-pencil version of the Verbal and Quantitative scales of the GRE. All students also completed computer-administered parallel forms of the paper-and-pencil versions. The order of administration of the four outcome measures was counterbalanced.

31 Outcome Measures by Group
Columns: Standard Verbal, Standard Quantitative, Computer Verbal, Computer Quantitative.
Rows: No Training, Standard Training, Computer Training, Standard and Computer Training.

32 Univariate Analyses: Each Variable Examined Separately
SYNTAX
manova stand_v, stand_q, comp_v, comp_q by group(1,4)
 /print = cellinfo(all) parameters signif(singledf) homogeneity error
 /power= exact
 /design .

33 Cell Means and Standard Deviations
[SPSS output: cell means, standard deviations, and Ns for STAND_V (Standard Measure of Verbal Ability), STAND_Q (Standard Measure of Quantitative Ability), COMP_V (Computer Measure of Verbal Ability), and COMP_Q (Computer Measure of Quantitative Ability), broken down by GROUP (No Train, Standard, Computer, Both) and for the entire sample; the numeric values were not preserved.]

34 [Slide contains only a figure; no text was preserved.]

35 Univariate Homogeneity of Variance Tests
[SPSS output: Cochran's C(24,4) and Bartlett-Box F(3,16589) for each outcome; Cochran's values were not preserved. Bartlett-Box p-values: STAND_V p = .003, STAND_Q p = .751, COMP_V p = .772, COMP_Q p = .041.]
One assumption underlying ANOVA is homogeneity of variance. Cochran's test is preferred over Bartlett's test. No real problem here.

36 Within-Cells Correlations with Std. Devs. on Diagonal
[SPSS output: pooled within-cells correlation matrix for STAND_V, STAND_Q, COMP_V, and COMP_Q, with standard deviations on the diagonal; the numeric values were not preserved.]
The multiple outcomes are highly related, especially the different abilities measured by the same method.

37 EFFECT .. GROUP (Cont.)
[SPSS output: Univariate F-tests with (3, 96) df for STAND_V, STAND_Q, COMP_V, and COMP_Q, reporting hypothesis and error SS and MS, F, and significance; the numeric values were not preserved.]
These omnibus F tests indicate that there are significant group differences for each of the dependent measures. They do not indicate where those differences exist, but there is little doubt that differences do exist.

38 EFFECT .. 1ST Parameter of GROUP (Cont.)
[SPSS output: Univariate F-tests with (1, 96) df for STAND_V, STAND_Q, COMP_V, and COMP_Q; the numeric values were not preserved.]
By default, SPSS uses effects coding for the Groups variable; when unique sums of squares are tested, each parameter is a test of one group against the grand mean (except for the last group). The first parameter is thus a test of Group 1 against the grand mean of all groups, for each outcome variable.
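A minimal sketch of what this effects (deviation) coding looks like for four groups, assuming Group 4 is the group without its own parameter; each column is one parameter, each row one group:

Group 1:   1   0   0
Group 2:   0   1   0
Group 3:   0   0   1
Group 4:  -1  -1  -1

With unique sums of squares, parameter 1 compares Group 1 with the grand mean, parameter 2 compares Group 2, and parameter 3 compares Group 3; Group 4 has no parameter of its own.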

39 EFFECT .. 2ND Parameter of GROUP (Cont.)
[SPSS output: Univariate F-tests with (1, 96) df for STAND_V, STAND_Q, COMP_V, and COMP_Q; the numeric values were not preserved.]
The second parameter is a test of Group 2 against the grand mean.

40 EFFECT .. 3RD Parameter of GROUP (Cont.)
[SPSS output: Univariate F-tests with (1, 96) df for STAND_V, STAND_Q, COMP_V, and COMP_Q; the numeric values were not preserved.]
The third parameter is a test of Group 3 against the grand mean. This parameter exhausts the 3 degrees of freedom for the Group effect.

41 Multivariate Analyses
Variables Treated as Linear Combinations that Maximize Group Separation

42 Multivariate analysis of variance can be thought of as addressing the question of whether any linear combination of the dependent variables can produce a significant separation of the groups. In this sense it is similar to canonical correlation analysis: the linear combination of variables that produces the biggest difference between the groups is formed first, and, if possible, subsequent linear combinations are formed that are independent of the first and that also produce the largest remaining group separation.
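A small numerical sketch of that idea, using synthetic data rather than the GRE example: form the between- and within-groups SSCP matrices and take the eigen-decomposition of W⁻¹B. The eigenvectors are the discriminant weight vectors, and the eigenvalues feed the multivariate test statistics (e.g., Wilks' Lambda).

# Sketch with made-up data: three groups, four outcome variables.
import numpy as np

rng = np.random.default_rng(0)
groups = [rng.normal(loc=m, size=(25, 4))
          for m in ([0, 0, 0, 0], [0.5, 0, 0.3, 0], [0, 0.6, 0, 0.2])]

X = np.vstack(groups)
grand_mean = X.mean(axis=0)

# Between-groups and within-groups SSCP matrices
B = sum(len(g) * np.outer(g.mean(0) - grand_mean, g.mean(0) - grand_mean)
        for g in groups)
W = sum((g - g.mean(0)).T @ (g - g.mean(0)) for g in groups)

# Discriminant directions = eigenvectors of W^-1 B; eigenvalues drive the tests
eigvals, eigvecs = np.linalg.eig(np.linalg.solve(W, B))
eigvals = np.sort(eigvals.real)[::-1]

wilks = np.prod(1.0 / (1.0 + eigvals))
print("eigenvalues:", np.round(eigvals, 3))
print("Wilks' Lambda:", round(wilks, 3))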

43 The significance of these linear combinations can be gauged in several ways. Four common tests of significance represent generalizations of the univariate approach to significance testing. In the univariate model, an F test gauges the ratio of between-groups variability to within-groups variability.

44 manova stand_v, stand_q, comp_v, comp_q by group(1,4)
 /print = cellinfo(means) parameters signif(singledf multiv dimenr eigen univ hypoth)
   homogeneity error(cor sscp) transform
 /discrim = stan corr alpha(1)
 /power= exact
 /design .
One multivariate approach to these data attempts to find the linear combinations of the four outcome variables that best separate the groups, with no structure imposed on the groups. This would be the most exploratory version.

45 Multivariate Test for Homogeneity of Dispersion Matrices
[SPSS output: pooled within-cells variance-covariance matrix for STAND_V, STAND_Q, COMP_V, and COMP_Q, followed by Box's M with an approximate F on (30, 25338) df and an approximate chi-square on 30 df; the numeric values were not preserved.]
This is an assumption underlying MANOVA.

46 EFFECT .. GROUP
[SPSS output: Multivariate Tests of Significance (S = 3, M = 0, N = 45 1/2) for Pillai's, Hotelling's, Wilks', and Roy's statistics, with approximate F, hypothesis and error df, and significance; the numeric values were not preserved.]
As in canonical correlation analysis, this overall test simply indicates whether there are any linear combinations of the outcome variables that can discriminate the groups significantly. It does not indicate how many such linear combinations there are. The rationale for using this omnibus test as a Type I error protection approach is that included among the possible linear combinations are those in which each outcome variable is the only variable receiving a weight.

47 Eigenvalues and Canonical Correlations
[SPSS output: eigenvalues, percentages, cumulative percentages, and canonical correlations for each root, followed by the dimension reduction analysis (Wilks' Lambda and F for roots 1 to 3, 2 to 3, and 3 to 3); the numeric values were not preserved.]
With four groups and four variables there are three possible linear combinations that can be formed (limited by the degrees of freedom for groups). All three provide significant and independent separation of the groups.
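That limit is just min(number of groups - 1, number of DVs) = min(4 - 1, 4) = 3, which matches the three roots shown in the dimension reduction analysis.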

48 EFFECT .. GROUP (Cont.)
[SPSS output: standardized discriminant function coefficients (weights) and correlations between the DEPENDENT variables and the canonical variables (loadings) for STAND_V, STAND_Q, COMP_V, and COMP_Q on each function; the numeric values were not preserved.]
The canonical variates and loadings are used in the same way here as they were in canonical correlation analysis. What are these linear combinations?

49-52 [Slides contain only figures; no text was preserved.]

53 EFFECT .. 1ST Parameter of GROUP
[SPSS output: Multivariate Tests of Significance (S = 1, M = 1, N = 45 1/2) for Pillai's, Hotelling's, Wilks', and Roy's statistics, with exact F, hypothesis and error df, and significance; F statistics are exact; the numeric values were not preserved.]
The default group parameters are effects codes, indicating the extent to which groups are different from the grand mean. This more refined test indicates whether any linear combination of the outcome variables can discriminate the first group from the grand mean.

54 Eigenvalues and Canonical Correlations
[SPSS output: eigenvalue, percentage, cumulative percentage, and canonical correlation for the single root; the numeric values were not preserved.]
Because this is inherently the comparison of two “groups”, there is only one way the discrimination can be made.

55 EFFECT .. 1ST Parameter of GROUP (Cont.)
[SPSS output: standardized discriminant function coefficients and correlations between the DEPENDENT variables and the canonical variable for STAND_V, STAND_Q, COMP_V, and COMP_Q; the numeric values were not preserved.]
Just a single linear combination can be formed to make the discrimination.

56 EFFECT .. 2ND Parameter of GROUP
[SPSS output: Multivariate Tests of Significance (S = 1, M = 1, N = 45 1/2) for Pillai's, Hotelling's, Wilks', and Roy's statistics, with exact F, df, and significance; the numeric values were not preserved.]
A similar test can be made for discriminating the second group from the grand mean.

57 Eigenvalues and Canonical Correlations
[SPSS output: eigenvalue, percentage, cumulative percentage, and canonical correlation for the single root; the numeric values were not preserved.]
Here too a single linear combination is possible.

58 EFFECT .. 2ND Parameter of GROUP (Cont.)
[SPSS output: standardized discriminant function coefficients and correlations between the DEPENDENT variables and the canonical variable for STAND_V, STAND_Q, COMP_V, and COMP_Q; the numeric values were not preserved.]

59 EFFECT .. 3RD Parameter of GROUP
[SPSS output: Multivariate Tests of Significance (S = 1, M = 1, N = 45 1/2) for Pillai's, Hotelling's, Wilks', and Roy's statistics, with exact F, df, and significance; the numeric values were not preserved.]
The last group parameter is a test of the third group against the grand mean. Significant discrimination is possible here too.

60 Eigenvalues and Canonical Correlations
[SPSS output: eigenvalue, percentage, cumulative percentage, and canonical correlation for the single root; the numeric values were not preserved.]
A single linear combination is possible.

61 EFFECT .. 3RD Parameter of GROUP (Cont.)
[SPSS output: standardized discriminant function coefficients and correlations between the DEPENDENT variables and the canonical variable for STAND_V, STAND_Q, COMP_V, and COMP_Q; the numeric values were not preserved.]

62 manova stand_v, stand_q, comp_v, comp_q by group(1,4)
 /contrast(group)=special( )
 /print = cellinfo(means) parameters signif(singledf multiv dimenr eigen univ hypoth)
   homogeneity error(cor sscp) transform
 /discrim = stan corr alpha(1)
 /power= exact
 /design .
A potentially more revealing analysis would specify the 2 x 2 structure of the Groups variable. The linear combinations that are sought would then be directed toward making those specified distinctions.
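The values inside special( ) were not preserved. Given the 2 x 2 interpretation described on the following slides, a contrast matrix of the following form would produce those parameters; this is a plausible reconstruction, not the original specification. Columns are Groups 1-4 (No training, Standard, Computer, Both); rows are the constant, computer training vs. none, standard training vs. none, and the interaction:

 1   1   1   1
-1  -1   1   1
-1   1  -1   1
 1  -1  -1   1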

63 EFFECT .. GROUP
[SPSS output: Multivariate Tests of Significance (S = 3, M = 0, N = 45 1/2) for Pillai's, Hotelling's, Wilks', and Roy's statistics, with approximate F, df, and significance; the numeric values were not preserved.]
As with the univariate analyses, the omnibus test for the multivariate analysis does not change. It simply gauges whether any discrimination is possible.

64 Eigenvalues and Canonical Correlations
[SPSS output: eigenvalues, percentages, cumulative percentages, canonical correlations, and the dimension reduction analysis (Wilks' Lambda and F for roots 1 to 3, 2 to 3, and 3 to 3); the numeric values were not preserved.]
These are the same as well. They reflect the number of possible linear combinations that can be extracted.

65 EFFECT .. GROUP (Cont.)
[SPSS output: standardized discriminant function coefficients and correlations between the DEPENDENT variables and the canonical variables for STAND_V, STAND_Q, COMP_V, and COMP_Q; the numeric values were not preserved.]

66 EFFECT .. 1ST Parameter of GROUP
[SPSS output: Multivariate Tests of Significance (S = 1, M = 1, N = 45 1/2) for Pillai's, Hotelling's, Wilks', and Roy's statistics, with exact F, df, and significance; the numeric values were not preserved.]
Now the first parameter reflects the structure imposed on the Groups variable. This tests whether it is possible to form a linear combination of the outcome variables that separates the average of the computer-trained groups from the average of the groups that did not receive any computer training.

67 Eigenvalues and Canonical Correlations
[SPSS output: eigenvalue and canonical correlation for the single root, followed by the standardized discriminant function coefficients and the correlations between the DEPENDENT variables and the canonical variable for the 1ST parameter of GROUP; the numeric values were not preserved.]

68 EFFECT .. 2ND Parameter of GROUP
[SPSS output: Multivariate Tests of Significance (S = 1, M = 1, N = 45 1/2) for Pillai's, Hotelling's, Wilks', and Roy's statistics, with exact F, df, and significance; the numeric values were not preserved.]
This tests whether it is possible to form a linear combination that separates those who received standard training from those who did not receive standard training.

69 Eigenvalues and Canonical Correlations
[SPSS output: eigenvalue and canonical correlation for the single root, followed by the standardized discriminant function coefficients and the correlations between the DEPENDENT variables and the canonical variable for the 2ND parameter of GROUP; the numeric values were not preserved.]

70 EFFECT .. 3RD Parameter of GROUP
[SPSS output: Multivariate Tests of Significance (S = 1, M = 1, N = 45 1/2) for Pillai's, Hotelling's, Wilks', and Roy's statistics, with exact F, df, and significance; the numeric values were not preserved.]
The remaining parameter is the interaction. It can be thought of as a test of the No Training and Complete Training groups compared to the groups that received just one kind of training.

71 Eigenvalues and Canonical Correlations
[SPSS output: eigenvalue and canonical correlation for the single root, followed by the standardized discriminant function coefficients and the correlations between the DEPENDENT variables and the canonical variable for the 3RD parameter of GROUP; the numeric values were not preserved.]

