Presentation transcript: "Your lecturer and course convener"

1 Your lecturer and course convener
Colin Gray · Room S16, William Guild Building · Telephone: (27) 2233

2 THE ANALYSIS OF PSYCHOLOGICAL DATA

3 SPSS course The Level Three SPSS course starts this week.
The workshops have been designed to illustrate the material I shall be covering in the lectures.

4 SPSS updates SPSS frequently brings out new versions.
The university is running SPSS 14 this year, but it is very similar to SPSS 13. In these lectures, I shall draw your attention to any important differences between SPSS 14 and SPSS 13.

5 Required reading Kinnear, P. R., & Gray, C. D. (2004). SPSS 12 made simple. Hove & New York: Psychology Press. OR Kinnear, P. R., & Gray, C. D. (2006). SPSS 14 made simple. Hove & New York: Psychology Press.

6 Recommended reading Howell, D. C. (2007). Statistical methods for psychology (6th ed.). Belmont, CA: Thomson/Wadsworth. This is an excellent textbook, which provides an in-depth coverage of all the topics in my lectures - and much else besides.

7 Suggestion The schedule for Methodology A identifies the exercises that will be included in the SPSS workshop each week. You will find it helpful to read the relevant chapter and the exercises before attending the class.

8 The examination Multiple-choice format.
Plenty of examples during the lectures. PRACTISE AND TEST YOURSELF AS YOU GO ALONG. Last year’s papers are in the library.

9 Lecture 1 REVISION

10 Results of the Caffeine experiment

11 Three important features
The three most important features of a distribution are:
1. the typical value, or AVERAGE, as measured by the MEAN, the MEDIAN or the MODE;
2. the SPREAD or DISPERSION of scores about the average (VARIANCE, STANDARD DEVIATION);
3. its SHAPE, which is explored by graphing the data.

12 The mean The MEAN of a set of n numbers X is their sum, divided by n.
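The formula itself appeared as an image on the original slide; the standard expression, matching the definition above, is:

\bar{X} = \dfrac{\sum X}{n}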

13 Calculating the means

14 The variance and standard deviation
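The formulas on this slide were images and are not in the transcript. The usual sample versions (assuming the n – 1 denominator that SPSS uses when estimating a population variance) are:

s^2 = \dfrac{\sum (X - \bar{X})^2}{n - 1}, \qquad s = \sqrt{s^2}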

15 Deviations from the mean sum to zero
The mean is the centre of gravity, or balance point, of the distribution. The deviations are the distances of the points from the balance point. They must sum to zero if balance is to be maintained: the positive and negative deviations must cancel each other out. (The slide’s diagram showed negative, zero and positive deviations balancing about the mean.)
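This claim follows in one line from the definition of the mean, since n\bar{X} = \sum X:

\sum (X - \bar{X}) = \sum X - n\bar{X} = 0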

16 Variance of scores in the caffeine group
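The worked calculation on the original slide is not reproduced in this transcript. A minimal Python sketch of the same steps, using made-up scores purely for illustration (not the real Caffeine data):

```python
# Hypothetical scores, for illustration only (not the real Caffeine data).
scores = [12, 9, 14, 11, 13, 10, 15, 12]

n = len(scores)
mean = sum(scores) / n                                 # the mean: sum divided by n
deviations = [x - mean for x in scores]                # deviations from the mean
variance = sum(d ** 2 for d in deviations) / (n - 1)   # sample variance (n - 1 denominator)
sd = variance ** 0.5                                   # standard deviation

print(round(sum(deviations), 10))        # 0.0 -- deviations sum to zero
print(round(variance, 2), round(sd, 2))  # the variance and standard deviation
```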

17 Statistical summary

18 Populations and samples
We have some data: 20 scores from the Caffeine group, 20 from the Placebo group. The POPULATION is the reference set of ALL such data that might be gathered. Our data make up only a selection or SAMPLE from the population. Sampling entails SAMPLING VARIABILITY. So if someone else did the experiment, the Placebo and Caffeine means (and the difference between them) would have different values.

19 Statistics and parameters
A STATISTIC is a property of the SAMPLE; whereas a PARAMETER is a property of the POPULATION. The values of the mean and standard deviation of a sample are ESTIMATES of the corresponding population parameters. Because of sampling variability, however, sample statistics are subject to ERROR.

20 Notational convention
We use Arabic letters to denote the statistics of the sample; we use Greek letters to denote PARAMETERS, that is, the corresponding characteristics of the population.

21 Hypothesis A statistical HYPOTHESIS is a statement about a population.
The NULL HYPOTHESIS (H0) is usually the negation of the researcher’s SCIENTIFIC HYPOTHESIS. It is the null hypothesis that is tested: if it fails the test, the scientific hypothesis is supported.

22 The null hypothesis The null hypothesis (H0) states that, in the population, the Caffeine and Placebo means have the same value. H0: μ1 = μ2

23 The alternative hypothesis
The alternative hypothesis (H1) states that the Caffeine and Placebo means are not equal.
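In the notation of the previous slide, this can be written H1: μ1 ≠ μ2.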

24 Independent samples The participants were RANDOMLY ASSIGNED to the Caffeine and Placebo conditions. The experiment yielded two sets of scores, one set from the Caffeine group, the other from the Placebo group. There is NO BASIS FOR PAIRING THE SCORES. We have INDEPENDENT SAMPLES.

25 Normal distribution Classical statistical inference is based upon the NORMAL DISTRIBUTION, which is symmetrical and bell-shaped. A normal distribution is centred on the population mean. If X is normally distributed, 95% of values lie within 1.96 standard deviations on either side of the mean: X ~ N(μ, σ²). (The slide’s figure showed the central 0.95 of the curve, between μ – 1.96σ and μ + 1.96σ.)

26 The standard normal variable Z
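The formula on this slide was an image; the standard definition, matching the description on the next slide, is:

Z = \dfrac{X - \mu}{\sigma}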

27 Standard normal distribution
The standard normal variable Z has a mean of zero and a standard deviation of 1: Z ~ N(0, 1). Any normal variable X can be transformed to Z by subtracting the mean and dividing by the standard deviation. (The slide’s figure showed the central 0.95 of the curve, between –1.96 and +1.96.)

28 Sampling distribution
A SAMPLING DISTRIBUTION is the distribution of a statistic such as the mean or the difference between means. The standard deviation of a sampling distribution is known as the STANDARD ERROR of the statistic concerned. If the original population is normal, the sampling distribution is normal and 95% of values lie within 1.96 standard errors of the mean on either side.

29 Sampling distribution of a statistic
If the population is normal, so is the sampling distribution. The statistic can be transformed to the standard normal variable Z by subtracting the mean of the sampling distribution and dividing by the standard error. (The slide’s figure showed the central 0.95 of the curve, between mean – 1.96SE and mean + 1.96SE.)

30 Standard errors of the mean (SEM) and the difference between means (SED)
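The formulas on this slide were images. The usual expressions, assuming (as the later slides do) a common population variance σ² and group sizes n1 and n2, are:

\mathrm{SEM} = \dfrac{\sigma}{\sqrt{n}}, \qquad \mathrm{SED} = \sqrt{\sigma^2 \left( \dfrac{1}{n_1} + \dfrac{1}{n_2} \right)}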

31 Can we test H0 with Z ? Why not use the statistic Z?
Z is the number of standard errors (of the difference) by which the obtained difference deviates from the hypothetical mean of zero. It’s a kind of DEVIATION SCORE. The difficulty is that, in practice, WE NEVER KNOW THE POPULATION VARIANCE σ².

32 The t statistic In the present example (where n1 = n2), the pooled estimate s² of σ² is simply the mean of the variance estimates from the two samples.

33 Pooled variance estimate
We do not know the supposedly constant population variance σ². Our estimate of σ² is the pooled variance estimate, which takes the place of σ² in the formula for t.
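The formulas on this slide were images. With equal group sizes (n1 = n2), the pooled estimate is the mean of the two sample variances, and it replaces σ² in the denominator of t:

s^2 = \dfrac{s_1^2 + s_2^2}{2}, \qquad t = \dfrac{\bar{X}_1 - \bar{X}_2}{\sqrt{s^2 \left( \dfrac{1}{n_1} + \dfrac{1}{n_2} \right)}}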

34 Significance Is the value –2.6 significant? Have we evidence against the null hypothesis? To answer that question, we need to locate the obtained value of t in the appropriate SAMPLING DISTRIBUTION of t. That distribution is identified by the value of the parameter known as the DEGREES OF FREEDOM.

35 Degrees of freedom The df is the total number of scores minus two.
In our example, df = 40 – 2 = 38. WHETHER A GIVEN VALUE OF t IS SIGNIFICANT WILL DEPEND UPON THE VALUE OF THE DEGREES OF FREEDOM.

36 Appearance of a t distribution
A t distribution is very like the standard normal distribution. The difference is that a t distribution has thicker tails. In other words, large values of t are more likely than large values of Z. With large df, the two distributions are very similar. (The slide’s figure compared Z ~ N(0, 1) with a t distribution on few degrees of freedom, t(2).)

37 The critical region We shall reject the null hypothesis if the value of t falls within EITHER tail of the t distribution on 38 degrees of freedom. To be significant beyond the .05 level, our value of t must be EITHER greater than +2.02 OR less than –2.02. To be significant beyond the .01 level, our value of t must be EITHER greater than +2.704 OR smaller than –2.704.

38 The two-tailed p-value
The p-value is the probability, ASSUMING THAT THE NULL HYPOTHESIS IS TRUE, of obtaining a value of the test statistic at least as EXTREME as the one you obtained. If the p-value is less than .05, you are in the critical region, and your value of t is significant beyond the .05 level.
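SPSS reports the two-tailed p-value for you, but as an illustration of the idea, here is a short Python sketch (using SciPy, which is an assumption, not part of the course software) that recovers the p-value from t and the degrees of freedom:

```python
from scipy import stats

t_value = -2.6   # the obtained value of t in the caffeine example
df = 38          # degrees of freedom

# Two-tailed p-value: probability, under H0, of a value at least as extreme in EITHER tail.
p_two_tailed = 2 * stats.t.sf(abs(t_value), df)
print(round(p_two_tailed, 3))   # approximately 0.013
```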

39 Result of the t test The p-value of t = –2.6 is .01 (to 2 places of decimals). Our t test has shown significance beyond the .05 level. But the p-value is greater than .01, so the result, although significant beyond the .05 level, is not significant beyond the .01 level.

40 Remember! A SMALL p-value (p < .05) is evidence AGAINST the null hypothesis. A large p-value (say .7 or higher) would have been consistent with the null hypothesis and your scientific hypothesis would have received no confirmation.

41 Your report
“The scores of the Caffeine group (M = 11.90; SD = 3.28) were significantly higher than those of the Placebo group (M = 9.25; SD = 3.16): t(38) = 2.60; p = .01.” Note the degrees of freedom, the value of t, and the p-value, which is expressed to two places of decimals.

42 Representing very small p-values
Suppose, in the caffeine experiment, that the p-value had been very small indeed. (Suppose t = 6.0). The computer would have given your p-value as ‘.000’. Never write, ‘p = .000’. This is unacceptable in a scientific article. Write, ‘p < .01’. You would have written the result of the test as ‘ t(38) = 6.0; p < .01’ .

43 Homogeneity of variance
In the classical independent t-test, the two variance estimates are averaged or POOLED to produce a single estimate of the supposedly HOMOGENEOUS population variance σ². A marked difference between sample variances suggests HETEROGENEITY OF VARIANCE.

44 Equal variances not assumed
The LEVENE TEST can be used to test for heterogeneity of variance. If the data fail the Levene test, another test statistic t* is used to test the null hypothesis of equality of the means. There is no pooling of the sample variances. The df usually has a smaller value than n1+n2–2.
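SPSS carries out both steps in the Independent-Samples T Test output. As a rough sketch of the same logic outside SPSS (SciPy assumed, data made up for illustration):

```python
from scipy import stats

# Hypothetical scores for two independent groups (illustration only).
group1 = [11, 14, 9, 13, 12, 15, 10, 13]
group2 = [8, 10, 9, 11, 7, 12, 10, 9]

# Levene test for heterogeneity of variance.
levene_stat, levene_p = stats.levene(group1, group2)

# If the data fail the Levene test, use the separate-variances (Welch) statistic t*;
# otherwise pool the variances as in the classical t test.
equal_var = levene_p >= .05
t_stat, p_value = stats.ttest_ind(group1, group2, equal_var=equal_var)

print(round(t_stat, 2), round(p_value, 3))
```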

45 SPSS tip In SPSS, there are two display modes:
1. Variable View. This contains information about the variables in your data set. 2. Data View. This contains your data (referred to as ‘values’ by SPSS). WORK IN VARIABLE VIEW FIRST, because that will make it much easier to enter your data in Data View.

46 Grouping variables In the caffeine experiment, there were two GROUPS of participants: the Caffeine group and the Placebo group. In SPSS, a GROUPING VARIABLE is a set of code numbers (values) which identifies the group to which a participant belongs. In this example, the SPSS data set must contain a grouping variable.

47 Grouping variable … The original table of results contained only the participants’ scores. The Group (grouping) variable labels each score according to the group to which the participant belonged: 0 = Placebo; 1 = Caffeine.

48 In Variable View, Tell SPSS about the grouping variable.
Tell SPSS what the numbers (values) mean by assigning VALUE LABELS.

49 Variable View Devise clear variable NAMES.
The Values are the numerical data themselves. Have informative variable LABELS. Assign VALUE LABELS to the code numbers (values) of your grouping variable. Avoid clutter: set DECIMALS to zero.

50 Levels of measurement SPSS classifies data according to the LEVEL OF MEASUREMENT. There are 3 levels:
1. SCALE data, which are measures on an independent scale with units. Heights, weights, performance scores, counts and IQs are scale data. Each score has ‘stand-alone’ meaning.
2. ORDINAL data, which are ranks. A rank has meaning only in relation to the other individuals in the sample. It does not express the degree to which a property is possessed.
3. NOMINAL data, which are assignments to categories. (So-many males, so-many females.)

51 Graphics The latest SPSS graphics require you to specify the level of measurement of the data on each variable. The group code numbers are at the NOMINAL level of measurement, because they are merely CATEGORY LABELS. Make the appropriate entry in the Measure column.

52 In Data View When you start in Variable View, everything will already be set up for you when you transfer to Data View. It helps to view your value labels (Choose View→Value Labels) to check that all is well.

53 Finding the independent-samples t test
It’s under Compare Means. Notice that, in the Compare Means menu, there is also the Paired-Samples t test.

54 Transferring the variable names
The ‘Test Variable(s)’ is the dependent variable (Score, in our example). You will not be allowed to proceed until you have Defined your groups.

55 Defining your groups Make sure that the code numbers you enter are the ones you chose for your grouping variable. If all is well, you will be returned to the Independent-samples t test dialog box.

56 The SPSS results table
The p-value is .013, so t is significant beyond the .05 level. Levene’s F has a high p-value (.751), therefore equal variances can be assumed.
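For comparison, the same t and p can be recovered from the summary statistics in the report (M = 11.90, SD = 3.28 and M = 9.25, SD = 3.16, with 20 participants per group). A SciPy-based sketch (an assumption — the original analysis was run in SPSS):

```python
from scipy import stats

# Summary statistics from the report: Caffeine vs Placebo, n = 20 per group.
t_stat, p_value = stats.ttest_ind_from_stats(
    mean1=11.90, std1=3.28, nobs1=20,
    mean2=9.25,  std2=3.16, nobs2=20,
    equal_var=True,   # Levene's p = .751, so the variances are pooled
)
print(round(t_stat, 2), round(p_value, 3))   # roughly t = 2.60, p = .013
```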

57 Parametric tests Some statistical models carry more assumptions than others. The t-test, for example, makes assumptions not only about the manner in which the populations are distributed (they are assumed to be normally distributed), but also about the PARAMETERS of the distributions (they must have the same variance). The t-test is thus a PARAMETRIC test.

58 Nonparametric tests Other tests make fewer assumptions about the parent populations. They are known as NONPARAMETRIC or DISTRIBUTION-FREE tests. They do not assume homogeneity of variance. They do not assume normality of distribution. You will make the acquaintance of some nonparametric tests in the SPSS class.

59 The Mann-Whitney U test
We could have tested the difference between the performance levels of the Caffeine and Placebo groups by using the Mann-Whitney U test, which is the nonparametric equivalent of the independent t-test. You will be applying the Mann-Whitney test in the SPSS practical class.
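A minimal sketch of the same test outside SPSS (SciPy assumed, scores made up for illustration):

```python
from scipy import stats

# Hypothetical scores for two independent groups (illustration only).
caffeine = [12, 15, 11, 14, 13, 16, 10, 12]
placebo = [9, 11, 8, 10, 12, 9, 7, 10]

# Mann-Whitney U test: the nonparametric equivalent of the independent-samples t test.
u_stat, p_value = stats.mannwhitneyu(caffeine, placebo, alternative="two-sided")
print(u_stat, round(p_value, 3))
```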

60 A repeated measures or within subjects experiment
A researcher hypothesises that words should be more quickly recognised when they are presented in the right visual field. Participants are presented with words in their left and right visual fields, and the recognition times are recorded. This experiment is said to be of WITHIN SUBJECTS design, or to have REPEATED MEASURES.

61 Related samples Two samples are said to be RELATED if there is basis for pairing the scores. In this example, there is such a basis. This experiment results in two DEPENDENT or RELATED samples of scores, because the SAME PEOPLE are tested in the two situations. You can expect a strong positive correlation between the scores for the right and left hemifields. In fact, in this case, r =

62 Related-samples t test
We can test the difference between the mean recognition times for words in the left and right hemifields by making a RELATED-SAMPLES T-TEST. It is easier to enter the data than when making an independent samples t test. There is no need for a grouping variable. Just follow the instructions at the SPSS workshop.
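A minimal sketch of a related-samples (paired) t test outside SPSS (SciPy assumed, recognition times made up for illustration):

```python
from scipy import stats

# Hypothetical recognition times (ms) for the SAME participants in each visual field.
left_field = [520, 610, 480, 555, 590, 505, 540, 570]
right_field = [495, 580, 470, 530, 575, 490, 520, 545]

# Paired test: each left-field time is paired with the same person's right-field time.
t_stat, p_value = stats.ttest_rel(left_field, right_field)
print(round(t_stat, 2), round(p_value, 3))
```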

63 Summary In the lectures, there is insufficient time to consider all the techniques you will encounter in the SPSS workshops. The general principles of hypothesis testing apply to ALL tests. All tests require a TEST STATISTIC, with a known SAMPLING DISTRIBUTION. There is a CRITICAL REGION containing values that are regarded as evidence against the null hypothesis.

64 Summary … The p-value of the test statistic is the probability, assuming that the null hypothesis is true, that a value AT LEAST AS EXTREME would be obtained. If the p-value is less than .05, we are in the critical region and the null hypothesis is rejected.

65 Summary … In SPSS, pave the way for data entry by working in Variable View first and giving full and thoughtful specifications of your variables. This applies to ALL the techniques you will be using in the SPSS workshop. When you have grouped data, you will need a GROUPING VARIABLE; with repeated measures, you will not.

66 Problems If anything in either the lectures or the SPSS workshops is unclear to you, don’t hesitate to e-mail me. Should I receive several e-mails about the same problem, I may, at the next lecture, devote some time to providing an explanation.

67 Key terms
variance, standard deviation
null hypothesis, alternative hypothesis
independent and related samples
normal and standard normal variables: the Z statistic
standard error of the mean and of the difference between means
the t statistic
pooled variance estimate
parameters
degrees of freedom
t distribution versus standard normal distribution

68 Key terms …
critical region
2-tailed p-value
homogeneity of variance
grouping variable
levels of measurement
nonparametric tests
Mann-Whitney U test
related-samples t test

