
1 My contact details Colin Gray, Room S2 (Thursday mornings, especially). E-mail address: … Telephone: (27) 2234. Don't hesitate to drop in or e-mail me.

2 This afternoon's programme 2.05 – 3.00 A short talk. 3.00 – 3.20 A break for coffee. 3.20 – 4.30 Running an analysis with PASW Statistics 17. (PASW stands for Predictive Analytics Software. This is the new name for SPSS Statistics 17 since IBM's takeover.)

3 SESSION 1 The one-way Analysis of Variance (ANOVA)

4 The simplest ANOVA design is known as the COMPLETELY RANDOMISED or ONE-FACTOR, BETWEEN SUBJECTS experiment.

5 A one-factor, between subjects experiment There are five GROUPS of participants or SUBJECTS. Each participant is RANDOMLY assigned to ONE and ONLY ONE of several different groups or conditions. This type of experiment is said to be of COMPLETELY RANDOMISED, ONE-FACTOR BETWEEN SUBJECTS or ONE-WAY design. An experiment of this design produces INDEPENDENT SAMPLES of scores.

6 The one-way ANOVA The ANOVA of data from a one-factor BETWEEN SUBJECTS experiment is known as the ONE-WAY ANOVA. The one-way ANOVA must be sharply distinguished from the one-factor WITHIN SUBJECTS (or REPEATED MEASURES) ANOVA, which is appropriate when each participant is tested under every condition. That experimental design produces RELATED SAMPLES of scores. The between subjects and within subjects ANOVAs are predicated upon different statistical interpretations, or MODELS, of the data.

7 Results of the experiment (the RAW DATA) Each column contains the scores of the ten participants who were tested under one particular condition.

8 Statistics of the results

9 Populations versus samples We have some data on skilled performance. We are actually interested in the performance of people IN GENERAL under these conditions. The POPULATION is the reference set of all possible observations. Our research question is about a population. Our data are merely a subset or SAMPLE from the population.

10 Sampling error Draw a large number of samples of fixed size from a population and calculate the mean M and standard deviation SD for each sample. The values of M and SD will vary from sample to sample. Sampling implies SAMPLING VARIABILITY. If we take the values of the sample statistics for characteristics of the population, we shall be in error. So sampling variability is usually termed SAMPLING ERROR.

11 Parameters versus statistics STATISTICS are characteristics of samples. PARAMETERS are characteristics of populations. We use Greek letters to denote parameters; we use Roman letters to denote statistics.

12 The null hypothesis The null hypothesis states that, in the population, all five means have the same value. In other words, the null hypothesis states that none of the drugs has any effect. It's as if everyone were performing under the Placebo condition.

13 Sampling error? The values of the means in our data set vary considerably, casting doubt upon the null hypothesis. On the other hand, we should expect sampling error to result in differences among the five sample means. Are the obtained differences so large as to amount to evidence against H_0?

14 Deviation scores A deviation score is a score from which the mean has been subtracted. Deviation scores have the very important property that they sum to zero.

15 Deviations sum to zero
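In symbols, the zero-sum property is the standard identity (the slide's own demonstration was an image):

\sum_{i=1}^{n}(x_i - M) = \sum_{i=1}^{n}x_i - nM = nM - nM = 0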

16 Breakdown of the total deviation j is the number of the group to which a participant was assigned: j = 1, 2, 3, 4, 5, corresponding to the Placebo, Drug A, Drug B, Drug C and Drug D groups, respectively. So M_1 is the Placebo mean, M_2 is the mean for the Drug A group, M_3 is the mean for the Drug B group, …, M_5 is the mean for the Drug D group.
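The breakdown itself, reconstructed from these definitions (the slide's equation was an image), with M the grand mean:

(x_{ij} - M) = (M_j - M) + (x_{ij} - M_j)

That is, total deviation = between groups deviation + within groups deviation.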

17 The total sum of squares The total sum of squares is the sum of the squares of all the total deviation scores. It is a measure of the TOTAL VARIABILITY of the scores.

18 Summing and squaring Starting with the breakdown of the total deviation and summing and squaring over all scores, we have …

19 Product terms? Where has the product term gone? It has disappeared because deviations about the mean sum to zero.

20 Breakdown (or partition) of the total sum of squares
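Written out, the partition (a reconstruction of the slide's missing equation) is:

\sum_{j=1}^{k}\sum_{i=1}^{n}(x_{ij} - M)^2 = n\sum_{j=1}^{k}(M_j - M)^2 + \sum_{j=1}^{k}\sum_{i=1}^{n}(x_{ij} - M_j)^2

that is, SS_total = SS_between + SS_within; the cross-product term has vanished because deviations about each group mean sum to zero.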

21 Interpretation of the partition The between groups deviation (and sum of squares) partly reflects any differences that there may be among the population means. It also reflects random influences such as sampling error and other contributors to the noisiness of the data. These influences are collectively known as ERROR. The within groups deviation and within groups sum of squares reflect only error.

22 How the one-way ANOVA works The variance BETWEEN the treatment means is compared with the average variance of scores around their means WITHIN the treatment groups. The comparison is made with a statistic called F.

23 Mean squares In the ANOVA, a variance estimate is known as a MEAN SQUARE, which is expressed as a SUM OF SQUARES (SS), divided by a parameter known as the DEGREES OF FREEDOM (df).

24 A mean square
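In symbols (reconstructing the slide's missing formula):

MS = \frac{SS}{df}, \qquad F = \frac{MS_{between}}{MS_{within}}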

25 Degrees of freedom The term DEGREES OF FREEDOM is borrowed from physics. The degrees of freedom of a system is the number of independent values needed to determine its state completely. Deviations of n values about their mean sum to zero. So if you know (n – 1) deviations, you know the nth deviation. The sum of squares of the n deviations has only (n – 1) degrees of freedom.

26 What the F statistic is measuring The building-block for MS_between is the between groups deviation of the group mean from the grand mean. The building-block for MS_within is the within groups deviation of a score from its group mean.

27 Repeated sampling Suppose the null hypothesis is true. Imagine the experiment were to be repeated thousands and thousands of times, with fresh samples of participants each time. There would be thousands and thousands of data sets, from each of which a value of F could be calculated. The values of F would vary considerably. The distribution of F with repeated sampling is known as its SAMPLING DISTRIBUTION.

28 Expected value The expected value of a statistic such as F is its long run mean value with repeated sampling. The expected value of F, written as E(F), is thus the mean of the sampling distribution of the statistic. If the null hypothesis is true and there are no differences among the population means, the expected value of F is close to (though not exactly) unity.

29 Summary of the one-way ANOVA

30 Calculating MS_within In the equal-n case, we can simply take the mean of the cell variance estimates. MS_within = (sum of the five cell variances)/5 = 48.36/5 = 9.67

31 Degrees of freedom of the within groups mean square
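Each of the k = 5 groups loses one degree of freedom to its own mean, so (a reconstruction of the slide's missing calculation):

df_{within} = k(n - 1) = 5 \times (10 - 1) = 45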

32 The between groups sum of squares revisited There are 50 terms in the summation. But M_j – M has only FIVE different values. So all ten subjects in any group get the same value for M_j – M.

33 Finding SS_between
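Because the ten subjects in each group share the same deviation, the 50-term sum collapses to (a reconstruction):

SS_{between} = \sum_{j=1}^{k}\sum_{i=1}^{n}(M_j - M)^2 = n\sum_{j=1}^{k}(M_j - M)^2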

34 Degrees of freedom of the between groups mean square SS_between is calculated from only 5 different values. But deviations about the mean sum to zero, so if you know FOUR deviations about the grand mean, you know the fifth. The between groups mean square has FOUR degrees of freedom.

35 Finding MS_between
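Dividing by the four degrees of freedom just established (a reconstruction):

MS_{between} = \frac{SS_{between}}{k - 1} = \frac{SS_{between}}{4}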

36 The value of F The value of F is clearly much greater than the expected value of close to 1 under the null hypothesis.

37 Range of variation of F The F statistic is the ratio of two sample variances. A variance can take only non-negative values. So the lower limit for F is zero. There is, however, no upper limit for F.

38 Specifying the sampling distribution To test the null hypothesis, you must be able to locate the value of F you obtained from your data in its theoretical sampling distribution. To specify the correct distribution of F (or any other test statistic), you must assign values to properties known as PARAMETERS.

39 Parameters of F Recall that the t distribution has ONE parameter: the DEGREES OF FREEDOM (df). The F distribution has TWO parameters: the degrees of freedom of the between groups and within groups mean squares, which we shall denote by df_between and df_within, respectively.

40 The correct F distribution We shall specify an F distribution with the notation F(df_between, df_within). We have seen that in our example, df_between = 4 and df_within = 45. The correct F distribution for our test of the null hypothesis is therefore F(4, 45). This specifies a whole POPULATION of values of F with which we can compare our own value.

41 The distribution of F(4, 45)

42 Expected value of F under the null hypothesis
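The standard result for an F distribution with df_within > 2 is:

E(F) = \frac{df_{within}}{df_{within} - 2} = \frac{45}{43} \approx 1.05

which is why, under the null hypothesis, E(F) is close to (though not exactly) unity.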

43 The critical region If the value of F is greater than the 95th percentile, reject the null hypothesis.

44 The p-value of F The p-value is the probability of a value at least as extreme as the one obtained – the grey area. If the p-value is less than .05, you're in the critical region and the null hypothesis is rejected.

45 The ANOVA summary table The results of the ANOVA are displayed in the ANOVA summary table. The table contains: 1. a list of sources of variance; 2. the degrees of freedom of each source; 3. its sum of squares; 4. its mean square; 5. the value of F; 6. the p-value of F.

46 The ANOVA summary table

47 Reporting the result There is a correct format for reporting the results of a statistical test. This is described in the APA publications manual. Never report a p-value as .000 – that's not acceptable in a scientific report. Write, "With the alpha-level set at .05, F is significant: F(4, 45) = 9.09; p < .01." Some prefer to give p-values to three places of decimals. You are now expected to include a measure of EFFECT SIZE as well. More on this later.

48 Lisa DeBruine's guidelines Lisa DeBruine has compiled a very useful document describing the most important of the APA guidelines for the reporting of the results of statistical tests. I strongly recommend this document, which is readily available on the Web. Sometimes the APA manual is unclear. In such cases, Lisa has opted for what seems to be the most reasonable interpretation. If you follow Lisa's guidelines, your submitted paper won't draw fire on account of poor presentation of your statistics!

49 A two-group, between subjects experiment

50 ANOVA or t test? We can compare the two means by using an independent-samples t test. But what would happen if, instead of making a t test, we were to run an ANOVA to test the null hypothesis of equality of the means?

51 Same result! Observe that F = t². Observe, however, that the p-value is the same for both tests. The ANOVA and the independent-samples t test are EXACTLY EQUIVALENT and result in the same decision about the null hypothesis.

52 Implications of a significant F Our F test has shown a significant effect of the Drug factor. What can we conclude? We can say that the null hypothesis is false. That means that, in the population, the means do not all have the same value. But it does not tell us WHICH differences among our group means are significant.

53 Making comparisons among the five treatment means

54 Planned comparisons Suppose that, before you ran your experiment, you had planned to make a set of specified comparisons among the treatment means. For example, you might have planned to compare the mean of the Placebo group with the mean for each of the drug conditions. I am going to describe how such comparisons are made. I am also going to compare the properties of different comparison sets.

55 Simple and complex comparisons A comparison between any two of the array of 5 means is known as a SIMPLE comparison. Comparisons between the Placebo mean and each of the others are simple comparisons. But you might want to compare, not single means, but aggregates (means) of means. For example, you might want to compare the Placebo mean with the mean of the four drug means. Such comparisons between aggregates are known as COMPLEX comparisons.

56 Examples

57 Non-independent comparisons The simple comparison of M_5 with M_1 and the complex comparison are not independent or ORTHOGONAL. The value of M_5 feeds into the value of the average of the means for the drug groups.

58 Systems of comparisons We shall now look at the properties of different sets or systems of comparisons. Which comparisons are independent or ORTHOGONAL? How much VARIANCE can we attribute to different comparisons? How can we test a comparison for SIGNIFICANCE?

59 Linear functions Y is a linear function of X if the graph of Y upon X is a straight line. For example, temperature in degrees Fahrenheit is a linear function of temperature in degrees Celsius.
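The familiar conversion formula is the example behind the next slide's graph (a standard identity, not shown in the extracted text):

F = \frac{9}{5}C + 32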

60 A linear (straight line) function The coefficient of C is the SLOPE of the line and the constant is the INTERCEPT, which is 32 degrees from the zero point along the vertical (Fahrenheit) axis.

61 Equation of a straight line Y = bX + b_0, where b and b_0 are the slope and intercept, respectively.

62 The general linear equation You have p independent variables, x_1, x_2, …, x_p. The general linear equation is: Y = b_0 + b_1x_1 + b_2x_2 + … + b_px_p

63 Linear contrasts Any comparison can be expressed as a sum of terms, each of which is a product of a treatment mean and a coefficient such that the coefficients sum to zero. When so expressed, the comparison is a LINEAR CONTRAST, because it has the form of a linear function. It looks artificial at first, but this notation enables us to study the properties of systems of comparisons among the treatment means.

64 The complex comparison of the Placebo mean with the mean of the means of the four drug conditions can be expressed as a linear function of the five treatment means …
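Written out (a reconstruction of the slide's missing equation):

\psi = (1)M_1 + \left(-\tfrac{1}{4}\right)M_2 + \left(-\tfrac{1}{4}\right)M_3 + \left(-\tfrac{1}{4}\right)M_4 + \left(-\tfrac{1}{4}\right)M_5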

65 Notice that the coefficients sum to zero

66 Notation for a contrast The Greek symbol psi (ψ) is often used to denote the value of a contrast.

67 More compactly, if there are k treatment groups, we can write
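The compact form (a reconstruction):

\psi = \sum_{j=1}^{k} c_j M_j, \qquad \text{with } \sum_{j=1}^{k} c_j = 0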

68 A more compact formula There are k terms in this summation.

69 Simple contrasts: Include all the means

70 Drop the Ms and just work with the coefficients

71 Helmert contrasts Compare the first mean with the mean of the other means. Drop the first mean and compare the second mean with the mean of the remaining means. Drop the second mean. Continue until you arrive at a comparison between the last two means.

72 Helmert contrasts… Our first contrast is 1, −¼, −¼, −¼, −¼. Our second contrast is 0, 1, −⅓, −⅓, −⅓. Our third contrast is 0, 0, 1, −½, −½. Our fourth (and final) contrast is 0, 0, 0, 1, −1.

73 Getting rid of the fractions
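Multiplying each row through by the denominator of its fractions gives the equivalent whole-number set (arithmetic on the coefficients from slide 72; rescaling a contrast changes its value but not its significance test):

4, −1, −1, −1, −1
0, 3, −1, −1, −1
0, 0, 2, −1, −1
0, 0, 0, 1, −1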

74 Orthogonal contrasts The first Helmert contrast in no way constrains the value of the second, because the first mean has been dropped. The first two contrasts do not affect the third, because the first two means have been dropped. This is a set of four independent or ORTHOGONAL contrasts.

75 The orthogonal property As with all contrasts, the coefficients in each row sum to zero. In addition, the sum of the PRODUCTS OF CORRESPONDING COEFFICIENTS in any pair of rows is zero. This means that we have an ORTHOGONAL contrast set.
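For example, for the first two whole-number Helmert rows given earlier:

(4)(0) + (−1)(3) + (−1)(−1) + (−1)(−1) + (−1)(−1) = 0 − 3 + 1 + 1 + 1 = 0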

76 Size of an orthogonal set In general, for an array of k means, you can construct a set of, at most, k – 1 orthogonal contrasts. In the present ANOVA example, k = 5, so the rule tells us that there can be no more than 4 orthogonal contrasts in the set. Several different orthogonal sets, however, can often be constructed for the same set of means.

77 Contrast sums of squares We have seen that in the one-way ANOVA, the value of SS_between reflects the sizes of the differences among the treatment means. In the same way, it is possible to measure the importance of a contrast by calculating a sum of squares which reflects the variation attributable to that contrast alone. We can use an F statistic to test each contrast for significance.

78 Formula for a contrast sum of squares
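In the equal-n case the formula is the standard one (reconstructed here from the description on the next slide):

SS_{\psi} = \frac{n\psi^2}{\sum_{j} c_j^2}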

79 Formula for a contrast sum of squares In the numerator, the value of the contrast is squared and multiplied by the number of participants in the group. In the denominator is the sum of the squares of the coefficients in the contrast.

80 Here, once again, is our set of Helmert contrasts, to which I have added the values of the five treatment means

81 Helmert contrast coefficients with means added

82 Add the sum of the squared coefficients

83 Add the sum of products

84 Value of the sum of squares Just plug the values into the formula.

85 Do this for the whole set

86 Non-orthogonal contrasts Contrasts don't have to be independent. For example, you might wish to compare each of the four drug groups with the Placebo group. What you want are SIMPLE CONTRASTS.

87 Simple contrasts These are linear contrasts – each row sums to zero. But they are not orthogonal – in any pairing of rows, the sum of products of corresponding coefficients is not zero. The contrast sums of squares will not sum to the between groups sum of squares.

88 Testing a contrast sum of squares for significance

89 Two approaches A contrast is a comparison between two means. You can therefore make an F test or you can make a t test. The two tests are equivalent.

90 Degrees of freedom of a contrast sum of squares A contrast sum of squares compares two means, even though one mean may be an aggregate of several others. A contrast sum of squares, therefore, has ONE degree of freedom, because the two deviations from the grand mean sum to zero.

91 Since a contrast sum of squares has one degree of freedom, its mean square is simply the sum of squares itself: MS_contrast = SS_contrast/1 = SS_contrast.

92 Degrees of freedom of F The numerator df = 1. The denominator df is just the df for the error term in the full ANOVA (45).

93 Testing the first Helmert contrast

94 Traditional t test formula
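The traditional independent-samples formula is the standard pooled-variance one (the slide's image is missing):

t = \frac{M_1 - M_2}{\sqrt{s_p^2\left(\frac{1}{n_1} + \frac{1}{n_2}\right)}}

where s_p^2 is the pooled estimate of the population variance.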

95 The equal-n case With equal groups, the formula for t simplifies to:
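A reconstruction: with n scores per group, s_p^2 = (s_1^2 + s_2^2)/2, so

t = \frac{M_1 - M_2}{\sqrt{(s_1^2 + s_2^2)/n}}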

96 In the context of ANOVA This test incorporates the pooled estimate of the population variance, with greater degrees of freedom. The test has more POWER than one based only on the data from two groups.

97 Modifying the t statistic
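Substituting MS_within for the two-group pooled variance gives the contrast version of t (the standard equal-n form, reconstructed here), tested on df_within = 45 degrees of freedom:

t = \frac{\psi}{\sqrt{MS_{within}\sum_{j} c_j^2 / n}}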

98 Applying the formula

99 Equivalence of the F and t tests

100 Heterogeneity of variance: the Behrens-Fisher statistic With marked heterogeneity of variance, the usual pooling of the sample variances is abandoned and the test statistic is: T = (M_1 − M_2) / √(s_1²/n_1 + s_2²/n_2)

101 The Behrens-Fisher problem What is the value of the degrees of freedom of T? If n_1 > n_2, the degrees of freedom of T lies between n_2 – 1 and the traditional value of n_1 + n_2 – 2. Several solutions to the problem of finding the degrees of freedom more precisely have been proposed.

102 The Welch-Satterthwaite solution This is the solution used by PASW. Others have been proposed more recently.
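The Welch-Satterthwaite estimate of the degrees of freedom is the standard one (the slide's formula was an image):

df \approx \frac{\left(s_1^2/n_1 + s_2^2/n_2\right)^2}{\dfrac{(s_1^2/n_1)^2}{n_1 - 1} + \dfrac{(s_2^2/n_2)^2}{n_2 - 1}}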

103 In this case, testing with T doesn't make much difference

104 Summary A contrast is a comparison between two means, so its sum of squares has ONE degree of freedom. The contrasts can therefore be tested with either F or t. (F = t².) If the contrasts (Helmert) form an orthogonal set, the contrast sums of squares sum to the value of SS_between.

105 Coffee break

106 PASW Statistics 17

107 Comparison with SPSS 16 From the user's point of view, PASW Statistics 17 is similar to SPSS 16. Some commands now appear in different menus than they did in SPSS 16. There are some new options. Some old problems and pitfalls remain. The same general user guidelines apply to all versions of PASW and SPSS.

108 The Data Editor In SPSS/PASW, there are two display modes: 1. VARIABLE VIEW. This contains information about the variables in your data set. 2. DATA VIEW. This spreadsheet-like display contains your numerical data, which are referred to as values by PASW: as far as we are concerned, values are always numbers. (We shall not be working with string variables such as lists of names or cities.) WORK IN VARIABLE VIEW FIRST, because: 1. that will make it much easier to enter and view your data in Data View; 2. it can improve the quality of the output.

109 Variable names versus variable labels Variable names are for your convenience while working in the Data Editor. They can be very cryptic, so long as you know what they mean. Variable names appear only in Data View. Variable LABELS are transparent and should be understandable by an outsider. They will appear in the output.

110 Grouping variables: assign value labels Values are numbers. They might be scores; but they also include the arbitrary code numbers making up GROUPING VARIABLES. Value labels tell you what the code numbers mean. They are ESSENTIAL. If you do not assign value labels to the code numbers and you leave the data file for some time, you won't remember whether 1 and 2 were male and female, respectively, or vice versa.

111 Assigning value labels to values while in Variable View

112 Displaying value labels You can see the labels in Data View by clicking on the luggage ticket in the View menu.

113 Levels of measurement SPSS/PASW classifies data according to the LEVEL OF MEASUREMENT. There are 3 levels: 1. SCALE data, which are measures on an independent scale with units. Heights, weights, performance scores, counts and IQs are scale data. Each score has stand-alone meaning. Two more or less equivalent terms are CONTINUOUS and INTERVAL. 2. ORDINAL data, which are RANKS. A rank has meaning only in relation to the other individuals in the sample. It has no stand-alone meaning. 3. NOMINAL data, which are assignments to categories. (So-many males, so-many females.) Nominal data are records of CATEGORICAL or QUALITATIVE variables.

114 Specifying the level of measurement

115 The drug experiment again Each column contains the scores of the ten participants who were tested under one particular condition. Notice that each line contains data from several subjects/participants. This won't do for PASW!

116 SPSS/PASW data sets One line for each case/participant/subject. The columns are variables. We need a variable for the scores. We need a grouping variable to indicate treatment group or category membership. For the data from the drug experiment, we shall need only two variables (columns): one for the scores, the other for group membership.

117 Variable View completed

118 Alternative Data Views

119 Points As always with PASW data files, each row contains data on one person only. Only two columns are needed. One column contains code numbers (values) indicating group membership. The other contains the scores that the participants achieved.

120 Finding the one-way ANOVA

121 The one-way dialog

122 Options

123 Beware of the means plot! With some default graphs, SPSS/PASW follows aesthetic guidelines. As a result, some graphs can be highly misleading. Here is the means plot for another data set from an experiment of the same design.

124 A means plot

125 A false picture! The table of means shows minuscule differences among the five group means. The p-value of F is very high – unity to two places of decimals. Nothing going on here!

126 A microscopic vertical scale Only a microscopically small section of the scale is shown on the vertical axis: 10.9 to 11.4! Even small differences among the group means look huge.

127 Into the Chart Editor Double-click on the image to get into the Chart Editor. Double-click on the vertical axis to access the scale specifications.

128 Specify zero as the minimum point Uncheck the minimum value box and enter zero as the desired minimum point. Click Apply.

129 Zero point now included The effect is dramatic! The profile is now as flat as a pancake. The graph now accurately depicts the results. Always be suspicious of graphs that do not show the ZERO POINT on the VERTICAL SCALE.

130 Simple contrasts with SPSS Here are the entries for the first contrast, which is between the Placebo and Drug A groups. Notice that you must enter five coefficients. Below that is the entry for the final contrast between the Placebo and Drug D groups.
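One plausible coding for these two contrasts (the slide's screenshot is missing; the signs match the Drug-minus-Placebo differences reported on the next slide):

Placebo vs Drug A: −1, 1, 0, 0, 0
Placebo vs Drug D: −1, 0, 0, 0, 1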

131 The results In the column headed Value of Contrast are the differences between pairs of treatment means: Drug A mean minus Placebo mean for the first contrast, and so on. For the final contrast, Drug D mean minus Placebo mean = 13.00 − 8.00 = 5.00.

132 Save time with Syntax! A computing package can be told to run specified analyses by means of a system of written instructions known as CONTROL LANGUAGE. In PASW/SPSS, the language is known as SYNTAX. If you have to run the same procedure again and again, you should create the appropriate syntax file and save it. At your next session, you need only open the data file and run the whole analysis using the syntax file instead of filling in all the boxes and pressing all the buttons again.

133 The Paste button In the ANOVA dialog, simply transfer the group and score variables. That orders the basic analysis. Click the Paste button at the foot of the One-way ANOVA dialog.

134 Pasting the syntax

135 The basic one-way ANOVA syntax
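A minimal sketch of what Paste produces for the basic analysis, assuming the score variable is named score and the grouping variable group (illustrative names, not taken from the slides):

* Basic one-way ANOVA pasted from the One-Way ANOVA dialog.
* Variable names score and group are illustrative.
ONEWAY score BY group
  /MISSING ANALYSIS.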

136 Structure of a Syntax command When the command keyword is blue, the command has the correct syntax and should run.

137 Check that the data set is active

138 Choose some options Observe what happens to the syntax file when we press the appropriate buttons and choose a means plot, descriptives, Tukey multiple-comparisons and order Helmert planned contrasts.

139 I press buttons to order Descriptives and a profile plot.

140 Descriptives and means plots

141 I click the Post Hoc button and select Tukey multiple pairwise comparisons.

142 Tukey tests added
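After those button presses, the pasted command carries a subcommand for each option; a sketch under the same assumed variable names:

* One-way ANOVA with descriptives, a means plot and Tukey comparisons.
ONEWAY score BY group
  /STATISTICS DESCRIPTIVES
  /PLOT MEANS
  /MISSING ANALYSIS
  /POSTHOC=TUKEY ALPHA(0.05).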

143 Helmert contrasts Let's order a set of Helmert contrasts. Click the Contrasts button and proceed as follows:

144 The Contrasts dialog

145 To enter the whole set … Don't return to the One-way ANOVA dialog yet. Click the Next button and enter the next row of contrast coefficients. Continue until the whole set of contrasts has been entered, then click the Continue button to return to the One-way ANOVA dialog. Now look at the syntax file.

146 Syntax for Helmert contrasts
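Using the whole-number Helmert coefficients from slide 73, the pasted file would look something like this (same assumed variable names):

* One-way ANOVA with a full set of Helmert contrasts.
ONEWAY score BY group
  /CONTRAST= 4 -1 -1 -1 -1
  /CONTRAST= 0 3 -1 -1 -1
  /CONTRAST= 0 0 2 -1 -1
  /CONTRAST= 0 0 0 1 -1
  /MISSING ANALYSIS.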

147 Don't write, just paste! Notice that I haven't written anything. I have produced all this syntax merely by pressing the Paste button. But I can easily adapt the analysis for other variables and data sets.

148 In summary The one-way ANOVA tests the null hypothesis of equality of ALL the means by comparing the variance between groups with the variance within groups. Further analysis is necessary for the testing of comparisons among individual group means. Report the results of your analysis in APA format, following Lisa DeBruine's guidelines. In SPSS/PASW, work in Variable View first. In the ANOVA output, watch out for the profile plots. Save time with syntax!
