
1 Applying Regression

2 The Course 14 (or so) lessons – Some flexibility: depends how we feel and what we get through

3 Part I: Theory of Regression
1. Models in statistics
2. Models with more than one parameter: regression
3. Samples to populations
4. Introducing multiple regression
5. More on multiple regression

4 Part 2: Application of Regression
6. Categorical predictor variables
7. Assumptions in regression analysis
8. Issues in regression analysis
9. Non-linear regression
10. Categorical and count variables
11. Moderators (interactions) in regression
12. Mediation and path analysis
Part 3: Taking Regression Further (kind of brief)
13. Introducing longitudinal multilevel models

Bonuses Bonus lesson 1: Why is it called regression? Bonus lesson 2: Other types of regression. 5

6 House Rules Jeremy must remember – not to talk too fast. If you don't understand – ask – any time. If you think I'm wrong – ask. (I'm not always right)

7 The Assistants Carla Xena, Eugenia Suarez Moran, Arian Daneshmand

8 Learning New Techniques Best kind of data to learn a new technique – data that you know well and understand. Your own data – in computer labs (esp. later on) – use your own data if you like. My data – I'll provide you with – simple examples, small sample sizes – conceptually simple (even silly)

9 Computer Programs Stata – mostly. I'll explain SPSS options. You'll like Stata more. Excel – for calculations – semi-optional. GPower

10 Lesson 1: Models in statistics Models, parsimony, error, mean, OLS estimators

11 What is a Model?

12 What is a model? A representation – of reality – not reality. A model aeroplane represents a real aeroplane – if the model aeroplane = the real aeroplane, it isn't a model

13 Statistics is about modelling – representing and simplifying. Sifting – what is important from what is not important. Parsimony – in statistical models we seek parsimony – parsimony = simplicity

14 Parsimony in Science A model should: – 1: be able to explain a lot – 2: use as few concepts as possible. The more it explains – the more you get. The fewer the concepts – the lower the price. Is it worth paying a higher price for a better model?

15 The Mean as a Model

16 The (Arithmetic) Mean We all know the mean – the average – learned about it at school – forget (didn't know) how clever the mean is. The mean is: – an Ordinary Least Squares (OLS) estimator – the Best Linear Unbiased Estimator (BLUE)

17 Mean as OLS Estimator Going back a step or two: MODEL was a representation of DATA – we said we want a model that explains a lot – how much does a model explain? DATA = MODEL + ERROR; ERROR = DATA − MODEL – we want a model with as little ERROR as possible

18 What is error? [Chart: data points (Y), the mean as the model (b0), and errors (e) as deviations from it]

19 How can we calculate the amount of error? Sum of errors? Sum of absolute errors?

20 Are small and large errors equivalent? – One error of 4 – four errors of 1 – the same? – What happens with different data? Y = (2, 2, 5) – b0 = 2 – not very representative. Y = (2, 2, 4, 4) – b0 = any value from 2 to 4 – indeterminate. There are an infinite number of solutions which would satisfy our criterion for minimum error

21 Sum of squared errors (SSE)

22 Determinate – always gives one answer. If we minimise SSE – we get the mean. Shown in the graph – SSE plotted against b0 – the minimum value of SSE occurs when b0 = mean
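A minimal Python sketch of this idea (illustrative data, not the course dataset): SSE as a function of the candidate value b0 is minimised at the arithmetic mean.

```python
# Sketch: SSE as a function of the candidate model b0, for a small dataset.
# The minimum lands at the arithmetic mean. Data values are illustrative.

def sse(data, b0):
    """Sum of squared errors of the one-parameter model DATA = b0 + ERROR."""
    return sum((y - b0) ** 2 for y in data)

data = [2, 2, 5]
mean = sum(data) / len(data)  # 3.0

# SSE at the mean is lower than at nearby candidate values.
assert sse(data, mean) < sse(data, 2.9)
assert sse(data, mean) < sse(data, 3.1)
print(mean, sse(data, mean))  # 3.0 6.0
```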


24 The Mean as an OLS Estimate

25 Mean as OLS Estimate The mean is an Ordinary Least Squares (OLS) estimate – as are lots of other things. This is exciting because – OLS estimators are BLUE – Best Linear Unbiased Estimators – proven with the Gauss-Markov theorem (which we won't worry about)

26 BLUE Estimators Best – minimum variance (of all possible unbiased estimators) – narrower distribution than other estimators, e.g. median, mode

27 SSE and the Standard Deviation Tying up a loose end

28 SSE closely related to SD Sample standard deviation – s – a biased estimator of the population SD. Population standard deviation – σ. Need to know the mean to calculate the SD – reduces N by 1 – hence divide by N − 1, not N – like losing one df
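The SSE–SD link can be sketched as s = √(SSE / (N − 1)), with one df lost to the mean. A small Python illustration with made-up data:

```python
import math

# Sketch of the link between SSE and the sample standard deviation:
# s = sqrt(SSE / (N - 1)), one df having been lost to estimating the mean.
# Data values are illustrative.
data = [2, 4, 4, 4, 5, 5, 7, 9]
n = len(data)
mean = sum(data) / n                       # 5.0
sse = sum((y - mean) ** 2 for y in data)   # 32.0
s = math.sqrt(sse / (n - 1))               # sample SD
print(sse, s)
```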

29 Proof That the mean minimises SSE – not that difficult – as statistical proofs go. Available in – Maxwell and Delaney, Designing Experiments and Analysing Data – Judd and McClelland, Data Analysis: A Model Comparison Approach (out of print?)

30 What's a df? The number of parameters free to vary – when one is fixed. The term comes from engineering – movement available to structures

31 Back to the Data The mean has 5 (N) df – the 1st moment. The variance has N − 1 df – the mean has been fixed – the 2nd moment – can be thought of as the amount cases vary away from the mean

32 While we are at it … Skewness has N − 2 df – the 3rd moment. Kurtosis has N − 3 df – the 4th moment – the amount cases vary from …

33 Parsimony and df The number of df remaining – a measure of parsimony. A model which contained all the data – has 0 df – not a parsimonious model. Normal distribution – can be described in terms of the mean and SD – 2 parameters – (z with 0 parameters)

34 Summary of Lesson 1 Statistics is about modelling DATA – models have parameters – fewer parameters, more parsimony, better. Models need to minimise ERROR – best model, least ERROR – depends on how we define ERROR – if we define error as the sum of squared deviations from the predicted value – the mean is the best MODEL

Lesson 1a: A really brief introduction to Stata 35

Commands 36 [Screenshot: Stata interface – command review, variable list, command window, output]

Stata Commands Can use menus – but commands are easy. All have a similar format: command variables, options. Stata is case sensitive – BEDS, beds and Beds are different. Stata lets you shorten commands – summarize sqft – su sq 37

More Stata Commands Open exercise 1.4.dta – Run: summarize sqm; table beds; mean price; histogram price – or: su be; tab be; mean pr; hist pr 38

39 Lesson 2: Models with one more parameter - regression

40 In Lesson 1 we said … Use a model to predict and describe data – the mean is a simple, one-parameter model

41 More Models Slopes and Intercepts

42 More Models The mean is OK – as far as it goes – it just doesn't go very far – a very simple prediction, using very little information. We often have more information than that – we want to use more information than that

43 House Prices Look at house prices in one area of Los Angeles. Predictors of house prices. Using: – sale price, size, number of bedrooms, size of lot, year built …

46 House Prices [Table: address, list price, beds, baths, sqft for houses on OLYMPIAD Dr, CHANSON Dr, West 58TH Pl, FAIRWAY Blvd, DON LUIS Dr, West 59TH St, WHELAN Pl, West 63RD St and ANGELES VISTA Blvd; numeric values lost in extraction]

47 One-Parameter Model The mean. How much is that house worth? $415,689. We use 1 df to say that

48 Adding More Parameters We have more information than this – we might as well use it – add a linear function of size in square feet (x1)

49 Alternative Expression Ŷ – the estimate of Y (the expected value of Y). Y – the value of Y

50 Estimating the Model We can estimate this model in four different, equivalent ways – provides more than one way of thinking about it. 1. Estimating the slope which minimises SSE 2. Examining the proportional reduction in SSE 3. Calculating the covariance 4. Looking at the accuracy of the predictions

51 Estimate the Slope to Minimise SSE

52 Estimate the Slope Stage 1 – draw a scatterplot – x-axis at the mean, not at zero. Mark the errors on it – called residuals – sum and square these to find SSE

55 Find the slope: add another slope to the chart – redraw the residuals – recalculate SSE – move the line around to find the slope which minimises SSE

56 First attempt:

57 Any straight line can be defined with two parameters – the location (height) of the line, b0 – sometimes called the constant – and the gradient of the slope, b1

58 Gradient [Chart: moving 1 unit along the x-axis moves b1 units up the line]

59 Height [Chart: the line sits b0 units up the y-axis]

60 Height If we fix the slope to zero – the height becomes the mean – hence the mean is b0. Height is defined as the point where the line hits the y-axis – the constant – the y-intercept

61 Why the constant? – b0x0 – where x0 is 1.00 for every case, i.e. x0 is constant. Implicit in Stata – (and SPSS, SAS, R) – some packages force you to make it explicit – (later on we'll need to make it explicit)

62 Why the intercept? – Where the regression line intercepts the y-axis – sometimes called the y-intercept

63 Finding the Slope How do we find the values of b0 and b1? – Start with: we jiggle the values to find the best estimates which minimise SSE – an iterative approach. Computer intensive – used to matter, doesn't really any more (with fast computers and sensible search algorithms – more on that later)

64 Start with – b0 = 416 (the mean) – b1 = 0.5 (a nice round number) – SSE = 365,774 – b0 = 300, b1 = 0.5, SSE = 341,683 – b0 = 300, b1 = 0.6, SSE = 310,240 – b0 = 300, b1 = 0.8, SSE = 264,573 – b0 = 300, b1 = 1, SSE = 301,797 – b0 = 250, b1 = 1, SSE = 255,366 – …
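The "jiggling" can be sketched as a coarse grid search in Python (made-up data and grid, not the house-price figures):

```python
# Minimal sketch of the "jiggle the values" idea: a coarse grid search for the
# (b0, b1) pair that minimises SSE. Data are made-up; a real fit would use OLS.

xs = [50, 70, 90, 110]
ys = [270, 290, 310, 330]   # exactly y = 220 + 1.0 * x, so SSE can reach 0

def sse(b0, b1):
    return sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))

best = min(
    ((b0, b1) for b0 in range(200, 241) for b1 in (0.5, 0.75, 1.0, 1.25)),
    key=lambda p: sse(*p),
)
print(best, sse(*best))  # (220, 1.0) 0.0
```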

65 Quite a long time later – b0 = … – b1 = … – SSE = 145,… Gives the position of the – regression line (or) – line of best fit. Better than guessing. Not necessarily the only method – but it is OLS, so it is the best (it is BLUE)


67 We now know – a zero-square-metre house is worth $216,000 – adding a square metre adds $1,080. This tells us two things – don't extrapolate to meaningless values of the x-axis – the constant is not necessarily useful; it is necessary to estimate the equation

Exercise 2a, 2b 68

69 Standardised Regression Line One big but: – scale dependent. Values change – £ to …, inflation. Scales change – £, £000, £00? Need to deal with this

70 Don't express in raw units – express in SD units – x1 = … – y = … – b1 = … We increase x1 by 1, and Ŷ increases by … So we increase x1 by 1 and Ŷ increases by … SDs

71 Similarly, 1 unit of x1 = 1/… SDs – increase x1 by 1 SD – Ŷ increases by (69.017/1) = … Put them both together

72 The standardised regression line – the change (in SDs) in Ŷ associated with a change of 1 SD in x1. A different route to the same answer – standardise both variables (divide by SD) – find the line of best fit

73 The standardised regression line has a special name: the correlation coefficient (r). (r stands for regression, but more on that later.) The correlation coefficient is a standardised regression slope – relative change, in terms of SDs
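A small Python sketch of this equivalence (toy data): standardise both variables, fit the OLS slope, and you get Pearson's r.

```python
import math

# Sketch: standardise x and y (subtract the mean, divide by the SD), then the
# OLS slope of the standardised variables is the correlation coefficient r.
# Data values are illustrative.

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0, 1.0, 4.0, 3.0, 5.0]

def standardise(v):
    m = sum(v) / len(v)
    sd = math.sqrt(sum((a - m) ** 2 for a in v) / (len(v) - 1))
    return [(a - m) / sd for a in v]

zx, zy = standardise(xs), standardise(ys)
# OLS slope through standardised data: sum(zx*zy) / sum(zx*zx)
slope = sum(a * b for a, b in zip(zx, zy)) / sum(a * a for a in zx)
print(round(slope, 3))  # 0.8, the Pearson correlation of xs and ys
```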

Exercise 2c 74

75 Proportional Reduction in Error

76 Proportional Reduction in Error We might be interested in the level of improvement of the model – how much less error (as a proportion) do we have – Proportional Reduction in Error (PRE). Mean only – Error(model 0) = 341,683. Mean + slope – Error(model 1) = 196,046


78 But we squared all the errors in the first place – so we can take the square root. This is the correlation coefficient. The correlation coefficient is the square root of the proportion of variance explained
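Using the SSE figures from the slides above, a quick Python check that √PRE gives the correlation:

```python
import math

# Proportional Reduction in Error, using the slides' own SSE figures:
# mean-only model vs mean + slope. sqrt(PRE) is the correlation coefficient.
sse_model0 = 341_683   # mean only
sse_model1 = 196_046   # mean + slope
pre = (sse_model0 - sse_model1) / sse_model0
r = math.sqrt(pre)
print(round(pre, 3), round(r, 3))  # 0.426 0.653
```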

79 Standardised Covariance

80 Standardised Covariance We are still iterating – we need a closed-form equation – an equation to solve to get the parameter estimates. The answer is a standardised covariance – a variable has variance – an amount of "differentness". We have used SSE so far

81 SSE varies with N – higher N, higher SSE. Divide by N – gives SSE per person (or house) – (actually N − 1; we have lost a df to the mean). Gives us the variance – the same as SD². We thought of SSE as a scattergram – Y plotted against X – (repeated image follows)


83 Or we could plot Y against Y – axes meet at the mean (415) – draw a square for each point – calculate the area of each square – sum the areas. Sum of areas – SSE. Sum of areas divided by N – variance

84 Plot of Y against Y

85 Draw Squares [Worked example: each value's deviation from the mean (88.9) is squared to give an area, e.g. 40.1 × 40.1; most figures lost in extraction]

86 What if we do the same procedure – instead of Y against Y – Y against X. Draw rectangles (not squares). Sum the areas. Divide by N − 1. This gives us the variance of x with y – the covariance – shortened to Cov(x, y)


88 [Worked example: 55 − 88.9 = −33.9, paired with a y-deviation of −2, gives area (−33.9) × (−2) = 67.8; another point's deviations give area 49.1 × 1 = 49.1]

89 More formally (and easily) We can state what we are doing as an equation – where Cov(x, y) is the covariance. Cov(x, y) = 5165. What do points in different sectors do to the covariance?
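The rectangle-areas recipe can be sketched in Python (toy data, not the house prices): deviations from each mean are multiplied, summed, and divided by N − 1.

```python
# Sketch of the covariance as "rectangle areas": for each case, multiply the
# deviation of x from its mean by the deviation of y from its mean, sum the
# products, and divide by N - 1. Data values are illustrative.

xs = [1, 2, 3, 4, 5]
ys = [2, 1, 4, 3, 5]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
print(cov)  # 2.0
```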

90 Problem with the covariance – it tells us about two things – the variances of X and Y – and the covariance. Need to standardise it – like the slope. Two ways to standardise the covariance – standardise the variables first (subtract the mean and divide by the SD) – or standardise the covariance afterwards

91 First approach – much more computationally expensive – too much like hard work to do by hand – need to standardise every value. Second approach – much easier – standardise the final value only. Need the combined variance – multiply the two variances – find the square root (they were multiplied in the first place)

92 Standardised covariance

93 The correlation coefficient â€“A standardised covariance is a correlation coefficient

94 Expanded …

95 This means … – we now have a closed-form equation to calculate the correlation – which is the standardised slope – which we can use to calculate the unstandardised slope

96 We know that:

97 So the value of b1 is the same as from the iterative approach
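A Python sketch of the closed form (toy data): b1 = Cov(x, y) / Var(x), which equals r × (SDy / SDx).

```python
import math

# Closed-form check that matches the iterative idea: the unstandardised slope
# is Cov(x, y) / Var(x), and equals r * (SDy / SDx). Data are illustrative.

xs = [1, 2, 3, 4, 5]
ys = [2, 1, 4, 3, 5]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
var_x = sum((x - mx) ** 2 for x in xs) / (n - 1)
var_y = sum((y - my) ** 2 for y in ys) / (n - 1)
b1 = cov / var_x
r = cov / math.sqrt(var_x * var_y)
# Same number by both routes:
assert abs(b1 - r * math.sqrt(var_y / var_x)) < 1e-12
print(b1, round(r, 2))  # 0.8 0.8
```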

98 The intercept – just while we are at it. The variables are centred at zero – we subtracted the mean from both variables – the intercept is zero, because the axes cross at the mean

99 Add the mean of y to the constant – adjusts for centring y. Subtract the mean of x – but not the whole mean of x – need to correct it for the slope
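A minimal sketch of that correction, with made-up means and slope: b0 = mean(y) − b1 × mean(x).

```python
# Sketch of recovering the intercept after fitting on centred variables:
# add back the mean of y, and subtract the mean of x scaled by the slope,
# i.e. b0 = mean(y) - b1 * mean(x). Numbers are illustrative.
mean_x, mean_y, b1 = 4.0, 3.0, 0.5
b0 = mean_y - b1 * mean_x
print(b0)  # 1.0
```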

100 Accuracy of Prediction

101 One More (Last One) We have one more way to calculate the correlation – looking at the accuracy of the prediction. Use the parameters – b0 and b1 – to calculate a predicted value for each case

102 Plot the actual price against the predicted price – from the model


104 r = … The correlation between the actual and predicted values. Seems a futile thing to do – and at this stage, it is – but later on we will see why

105 Some More Formulae For hand calculation Point biserial

106 Phi (φ) – used for 2 dichotomous variables.
Vote P / Vote Q: Homeowner – A: 19, B: 54; Not homeowner – C: 60, D: 53
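Using the table's counts, a sketch of the phi calculation, φ = (AD − BC) / √((A+B)(C+D)(A+C)(B+D)):

```python
import math

# Phi coefficient from the slide's 2x2 table (A=19, B=54, C=60, D=53):
# (AD - BC) divided by the square root of the product of the marginal totals.
a, b, c, d = 19, 54, 60, 53
phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
print(round(phi, 3))  # -0.267
```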

107 Problem with the phi correlation – unless Px = Py (or Px = 1 − Py), the maximum (absolute) value is < 1.00. The tetrachoric correlation can be used to correct this. Rank (Spearman) correlation – used where data are ranked

108 Summary The mean is an OLS estimate – OLS estimates are BLUE. Regression line – the best prediction of the outcome from the predictor – an OLS estimate (like the mean). Standardised regression line – a correlation

109 Four ways to think about a correlation – 1. Standardised regression line – 2. Proportional Reduction in Error (PRE) – 3. Standardised covariance – 4. Accuracy of prediction

Regression and Correlation in Stata
Correlation: correlate x y
Covariance: correlate x y, cov
Regression: regress y x
Or: regress price sqm 110

Post-Estimation Stata commands leave something behind. You can run post-estimation commands – they mean "from the last regression". Get predicted values: predict my_preds. Get residuals: predict my_res, residuals (residuals comes after the comma, so it's an option) 111

Graphs Scatterplot: scatter price beds. Regression line: lfit price beds. Both graphs: twoway (scatter price beds) (lfit price beds) 112

What happens if you run reg without a predictor? â€“regress price 113

Exercises 114

115 Lesson 3: Samples to Populations – Standard Errors and Statistical Significance

116 The Problem In the social sciences – we investigate samples. Theoretically – randomly taken from a specified population – every member has an equal chance of being sampled – sampling one member does not alter the chances of sampling another. Not the case in (say) physics, biology, etc.

117 Population But it's the population that we are interested in – not the sample – a population statistic is represented with a Greek letter – a hat means an estimate

118 Sample statistics (e.g. the mean) estimate population parameters. Want to know – the likely size of the parameter – whether it is > 0

119 Sampling Distribution We need to know the sampling distribution of a parameter estimate – how much does it vary from sample to sample? If we make some assumptions – we can know the sampling distribution of many statistics – start with the mean

120 Sampling Distribution of the Mean Given – a normal distribution – a random sample – continuous data, the mean has a known sampling distribution – repeated sampling will give a known distribution of means – centred around the true (population) mean (μ)

121 Analysis Example: Memory Difference in memory for different words – 10 participants given a list of 30 words to learn, then tested – two types of word: abstract, e.g. love, justice; concrete, e.g. carrot, table


123 Confidence Intervals This means – if we know the mean in our sample – we can estimate where the mean in the population (μ) is likely to be. Using – the standard error (SE) of the mean – represents the standard deviation of the sampling distribution of the mean

124 Almost 2 SDs contain 95% 1 SD contains 68%

125 We know the sampling distribution of the mean – t distributed if N < 30 – normal with large N (> 30) – asymptotically normal. We know the range within which means from other samples will fall – therefore the likely range of μ

126 Two implications of the equation – increasing N decreases the SE, but only a bit (the SE halves if N is 4 times bigger) – decreasing the SD decreases the SE. Calculate confidence intervals – from standard errors. 95% is a standard level of CI – in 95% of samples the true mean will lie within the 95% CIs – in large samples: 95% CI = 1.96 SE – in smaller samples: depends on the t distribution (df = N − 1 = 9)
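A sketch of the large-sample case with illustrative values (not the memory-study data): SE = s / √N and 95% CI = mean ± 1.96 × SE.

```python
import math

# Large-sample 95% confidence interval for a mean: SE = s / sqrt(N),
# CI = mean +/- 1.96 * SE. Values are illustrative.
mean, s, n = 20.0, 4.0, 100
se = s / math.sqrt(n)          # 0.4
ci = (mean - 1.96 * se, mean + 1.96 * se)
print(se, ci)
```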


129 What is a CI? (For a 95% CI): a 95% chance that the true (population) value lies within the confidence interval? No. Rather: in 95% of samples, the true mean will land within the confidence interval

130 Significance Test Probability that μ is a certain value – almost always 0 – doesn't have to be, though. We want to test the hypothesis that the difference is equal to 0 – i.e. find the probability of this difference occurring in our sample IF μ = 0 – (not the same as the probability that μ = 0)

131 Calculate the SE, and then t – t has a known sampling distribution – we can test the probability that a certain value is included

132 Other Parameter Estimates The same approach – prediction, slope, intercept, predicted values – at this point, prediction and slope are the same (won't be later on). One predictor only – more complicated with > 1

133 Testing the Degree of Prediction Prediction is the correlation of Y with Ŷ – the correlation – when we have one IV. Use F, rather than t. We started with SSE for the mean only – this is SS_total – divide this into SS_residual and SS_regression: SS_tot = SS_reg + SS_res
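The partition can be sketched with illustrative sums of squares (not the house-price figures): F = (SS_reg / df_reg) / (SS_res / df_res).

```python
# Sketch of the sums-of-squares partition and the F ratio for one predictor:
# SS_tot = SS_reg + SS_res, F = (SS_reg / df_reg) / (SS_res / df_res).
# Numbers are illustrative.
ss_tot, ss_res = 500.0, 200.0
n, k = 27, 1                      # 27 cases, 1 predictor
ss_reg = ss_tot - ss_res          # 300.0
f = (ss_reg / k) / (ss_res / (n - k - 1))
print(ss_reg, f)  # 300.0 37.5
```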


135 Back to the house prices – original SSE (SS_total) = … – SS_residual = … (what is left after our model) – SS_regression = … − … = … (what our model explains)


137 F = 18.6, df = 1, 25, p = … – we can reject H0. H0: prediction is not better than chance – a significant effect

138 Statistical Significance: What does a p-value (really) mean?

139 A Quiz Six questions, each true or false. Write down your answers (if you like). An experiment has been done – carried out perfectly, all assumptions perfectly satisfied, absolutely no problems. p = 0.01 – which of the following can we say?

You have absolutely disproved the null hypothesis (that is, there is no difference between the population means).

You have found the probability of the null hypothesis being true.

You have absolutely proved your experimental hypothesis (that there is a difference between the population means).

You can deduce the probability of the experimental hypothesis being true.

You know, if you decide to reject the null hypothesis, the probability that you are making the wrong decision.

You have a reliable experimental finding in the sense that if, hypothetically, the experiment were repeated a great number of times, you would obtain a significant result on 99% of occasions.

146 OK, What is a p-value? Cohen (1994): "[a p-value] does not tell us what we want to know, and we so much want to know what we want to know that, out of desperation, we nevertheless believe it does" (p. 997)

147 OK, What is a p-value? Sorry, didn't answer the question. It's: the probability of obtaining a result as or more extreme than the result we have in the study, given that the null hypothesis is true. Not the probability that the null hypothesis is true

148 A Bit of Notation Not because we like notation – but so we have to say a lot less. Probability – P. Null hypothesis is true – H. Result (data) – D. Given – |

149 What's a P Value? P(D|H) – the probability of the data occurring if the null hypothesis is true. Not P(H|D) (what we want to know) – the probability that the null hypothesis is true, given that we have the data. P(H|D) ≠ P(D|H)

150 What is the probability you are prime minister – given that you are British – P(M|B) – very low. What is the probability you are British – given you are prime minister – P(B|M) – very high. P(M|B) ≠ P(B|M)

151 There's been a murder – someone murdered an instructor (perhaps they talked too much). The police have DNA. The police have your DNA – they match(!). The DNA matches 1 in 1,000,000 people. What's the probability you didn't do the murder, given the DNA match – (H|D)?

152 Police say: – P(D|H) = 1/1,000,000. Luckily, you have Jeremy on your defence team. We say: – P(D|H) ≠ P(H|D). The probability that someone matches the DNA who didn't do the murder – incredibly high

153 Back to the Questions Haller and Kraus (2002) – asked those questions of groups in Germany – psychology students – psychology lecturers and professors (who didn't teach stats) – psychology lecturers and professors (who did teach stats)

154 1. You have absolutely disproved the null hypothesis (that is, there is no difference between the population means). "True": 34% of students, 15% of professors/lecturers, 10% of professors/lecturers teaching statistics. False – we have found evidence against the null hypothesis

155 2. You have found the probability of the null hypothesis being true. – 32% of students – 26% of professors/lecturers – 17% of professors/lecturers teaching statistics. False – we don't know

156 3. You have absolutely proved your experimental hypothesis (that there is a difference between the population means). – 20% of students – 13% of professors/lecturers – 10% of professors/lecturers teaching statistics. False

157 4. You can deduce the probability of the experimental hypothesis being true. – 59% of students – 33% of professors/lecturers – 33% of professors/lecturers teaching statistics. False

158 5. You know, if you decide to reject the null hypothesis, the probability that you are making the wrong decision. 68% of students, 67% of professors/lecturers, 73% of professors/lecturers teaching statistics. False

159 6. You have a reliable experimental finding in the sense that if, hypothetically, the experiment were repeated a great number of times, you would obtain a significant result on 99% of occasions. – 41% of students – 49% of professors/lecturers – 37% of professors/lecturers teaching statistics. False. Another tricky one – it can be worked out – P(replication)

160 One Last Quiz I carry out a study – all assumptions perfectly satisfied – random sample from the population – I find p = 0.05. You replicate the study exactly – what is the probability you find p < 0.05?

161 I carry out a study – all assumptions perfectly satisfied – random sample from the population – I find p = 0.01. You replicate the study exactly – what is the probability you find p < 0.05?

162 Significance testing creates boundaries and gaps where none exist. Significance testing means that we find it hard to build upon knowledge – we don't get an accumulation of knowledge

163 Yates (1951): "the emphasis given to formal tests of significance … has resulted in … an undue concentration of effort by mathematical statisticians on investigations of tests of significance applicable to problems which are of little or no practical importance … and … it has caused scientific research workers to pay undue attention to the results of the tests of significance … and too little to the estimates of the magnitude of the effects they are investigating"

164 Testing the Slope The same idea as with the mean – estimate the 95% CI of the slope – estimate the significance of the difference from a value (usually 0). Need to know the SD of the slope – similar to the SD of the mean


166 Similar to the equation for the SD of the mean. Then we need the standard error – similar(ish). When we have the standard error – we can go on to the 95% CI – and the significance of the difference


168 Confidence Limits 95% CI – t dist with N − k − 1 df is 2.31 – CI = … – the 95% confidence limits

169 Significance of the difference from zero – i.e. the probability of getting the result if β = 0. Not the probability that β = 0. This probability is (of course) the same as the value for the prediction

170 Testing the Standardised Slope (Correlation) The correlation is bounded between −1 and +1 – it does not have a symmetrical distribution, except around 0. Need to transform it – the Fisher z transformation – approximately normal

171 95% CIs – 0.879 − 1.96 × 0.38 = 0.13 – 0.879 + 1.96 × 0.38 = 1.62

172 Transform back to a correlation: 95% CIs = 0.13 to 0.92. Very wide – because of the small sample size – maybe that's why CIs are not reported?

173 Using Excel Functions in Excel – FISHER() – to carry out the Fisher transformation – FISHERINV() – to transform back to a correlation
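The same two functions exist in Python as math.atanh and math.tanh; a sketch reproducing the slides' figures (z = 0.879, SE = 0.38):

```python
import math

# Fisher z transformation: z = atanh(r) (Excel's FISHER()), inverse tanh
# (FISHERINV()). Build the 95% CI on the z scale, then transform back.
# Figures are the slides': z = 0.879, SE = 0.38.
z, se = 0.879, 0.38
lo, hi = z - 1.96 * se, z + 1.96 * se
ci = (math.tanh(lo), math.tanh(hi))      # back on the r scale
print(round(lo, 2), round(hi, 2))        # z-scale CI
print(round(ci[0], 2), round(ci[1], 2))  # roughly 0.13 to 0.92
```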

174 The Others The same ideas for the calculation of CIs and SEs for – the predicted score – gives the expected range of values given X. Same for the intercept – but we have probably had enough

One more tricky thing (don't worry if you don't understand). For means, regression estimates, etc. – estimate – 95% confidence intervals – p-value. They match 175

For correlations, odds ratios, etc. – they no longer match – 95% CIs – 0.0000 – p-value – … Because the sampling distribution of the mean – does not depend on the value. The sampling distribution of a proportion – does depend on the value – more certainty around 0.9 than around 0.5 176

177 Lesson 4: Introducing Multiple Regression

178 Residuals We said Y = b0 + b1x1. We could have said Yi = b0 + b1xi1 + ei. We ignored the i on the Y, and we ignored the ei – it's called error, after all. But it isn't just error – it's trying to tell us something

179 What Error Tells Us Error tells us that a case has a different score for Y than we predict – there is something about that case. Called the residual – what is left over after the model. Contains information – something is making the residual ≠ 0 – but what?


182 The residual (+ the mean) is the expected value of Y if all cases were equal on X. It is the value of Y, controlling for X. Other words: – holding constant – partialling – residualising (residualised scores) – conditioning on

183 Sometimes adjustment is enough on its own – measure performance against criteria. Teenage pregnancy rates – measure pregnancy and abortion rates in areas – control for socio-economic deprivation, religion, rural/urban and anything else important – see which areas have lower teenage pregnancy and abortion rates, given the same level of deprivation. Value-added education tables – measure school performance – control for initial intake

184 [Table: sqm, price, predicted, residual, adjusted value (mean + residual); numeric values lost in extraction]

185 Control? In experimental research – use experimental control – e.g. same conditions, materials, time of day, accurate measures, random assignment to conditions. In non-experimental research – can't use experimental control – use statistical control instead

186 Analysis of Residuals What predicts differences in crime rate – after controlling for socio-economic deprivation? – Number of police? – Crime prevention schemes? – Rural/urban proportions? – Something else? This is (mostly) what multiple regression is about

187 Exam performance – consider the number of books a student read (books) – and the number of lectures (max 20) a student attended (attend). Books and attend as IVs, grade as the outcome

188 First 10 cases

189 Use books as IV – R = 0.492, F = 12.1, df = 1, 28, p = 0.001 – b0 = 52.1, b1 = 5.7 – (intercept makes sense). Use attend as IV – R = 0.482, F = 11.5, df = 1, 38, p = 0.002 – b0 = 37.0, b1 = 1.9 – (intercept makes less sense)

190 [Scatterplot: books (x-axis) against grade out of 100 (y-axis)]


192 Problem Use R² to give the proportion of shared variance – books = 24% – attend = 23%. So we have explained 24% + 23% = 47% of the variance – NO!!!!!

193 The correlation of books and attend is (unsurprisingly) not zero – some of the variance that books shares with grade is also shared by attend. Look at the correlation matrix: BOOKS, ATTEND, GRADE [correlation values lost in extraction]

194 I have access to 2 cars. My wife has access to 2 cars – do we have access to four cars? – No. We need to know how many of my 2 cars are shared. Similarly with regression – but we can do this with the residuals – residuals are what is left after (say) books – see if the residual variance is explained by attend – can use this new residual variance to calculate SS_res, SS_total and SS_reg

195 Well, almost. – This would give us correct values for the SS – it would not be correct for the slopes, etc. Because it assumes that the variables have a causal priority – why should attend have to take what is left by books? – why should books have to take what is left by attend? Use OLS again; take the variance they share

196 Simultaneously estimate 2 parameters – b1 and b2 – Y = b0 + b1x1 + b2x2 – x1 and x2 are IVs – shared variance. Not trying to fit a line any more – trying to fit a plane. Can solve iteratively – closed-form equations are better – but they are unwieldy

197 [3D scatterplot of y against x1 and x2 (2 points only)]

198 [3D plot: the regression plane, with intercept b0 and slopes b1 and b2]


Increasing Power What if the predictors don't correlate? Regression is still good – it increases the power to detect effects – (more on power later) – less variance left over. When do we know the two predictors don't correlate? 200

201 (Really) Ridiculous Equations

202 The good news – there is an easier way. The bad news – it involves matrix algebra. The good news – we don't really need to know how to do it

We're not programming computers – so we usually don't care. Very, very occasionally it helps to know what the computer is doing 203

204 Back to the Good News We can calculate the standardised parameters as B = Rxx⁻¹ × Rxy, where – B is the vector of regression weights – Rxx⁻¹ is the inverse of the correlation matrix of the independent (x) variables – Rxy is the vector of correlations of the x and y variables
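For two predictors the matrix formula reduces to a 2 × 2 inverse that can be written out by hand; a sketch with made-up correlations (not the books/attend data):

```python
# Sketch of B = Rxx^-1 * Rxy for two predictors, using the 2x2 matrix inverse
# written out by hand. Correlation values are illustrative.

r_x1x2 = 0.5               # correlation between the two predictors
r_x1y, r_x2y = 0.6, 0.4    # correlations of each predictor with the outcome

# Inverse of Rxx = [[1, r], [r, 1]] is [[1, -r], [-r, 1]] / (1 - r^2),
# so each standardised weight is (r_xy - r * r_other_y) / (1 - r^2).
det = 1 - r_x1x2 ** 2
beta1 = (r_x1y - r_x1x2 * r_x2y) / det
beta2 = (r_x2y - r_x1x2 * r_x1y) / det
print(round(beta1, 3), round(beta2, 3))  # 0.533 0.133
```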

Exercise

Exercises Exercise 4.1 – Grades data in Excel. Exercise 4.2 – Repeat in Stata. Exercise 4.3 – Zero correlation. Exercise 4.4 – Repeat therapy data. Exercise 4.5 – PTSD in families. 206

207 Lesson 5: More on Multiple Regression

Contents More on parameter estimates – standard errors of coefficients. R, R², adjusted R². Extra bits – suppressors – decisions about control variables – standardised estimates > 1 – variable entry techniques 208

More on Parameter Estimates 209

210
210 Parameter Estimates Parameter estimates (b1, b2 … bk) were standardised –Because we analysed a correlation matrix Represent the correlation of each IV with the outcome –When all other IVs are held constant

211
211 Can also be unstandardised Unstandardised represent the unit (rather than SD) change in the outcome associated with a 1 unit change in the IV –When all the other variables are held constant Parameters have standard errors associated with them –As with one IV –Hence a t-test, and associated probability, can be calculated Trickier than with one IV

212
212 Standard Error of Regression Coefficient Standardised is easier –R²i is the value of R² when all the other predictors are used as predictors of that variable Note that if R²i = 0, the equation is the same as the previous one
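The formula image on this slide did not survive conversion. A commonly used form for the standard error of a standardised coefficient is sketched below; treat the exact expression as my assumption about what the slide showed, not as a quote from it.

```python
import math

def se_beta(r2_model, r2_i, n, k):
    """Approximate SE of a standardised coefficient.
    r2_model: R^2 of the full model; r2_i: R^2 from regressing
    predictor i on the other predictors; n cases, k predictors."""
    return math.sqrt((1 - r2_model) / ((1 - r2_i) * (n - k - 1)))

# When r2_i = 0 this reduces to the single-predictor formula
print(round(se_beta(0.5, 0.0, 103, 1), 3))  # sqrt(0.5 / 101)
```

Note how a predictor that is well explained by the others (large R²i) inflates its own standard error.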

213
Multiple R 213

214
214 Multiple R The degree of prediction –R (or Multiple R) –No longer equal to b R² might be equal to the sum of the squares of B –Only if all the xs are uncorrelated

215
215 In Terms of Variance Can also think of R² in terms of variance explained. –Each IV explains some variance in the outcome –The IVs share some of their variance Can't share the same variance twice

216
216 [Figure: Venn diagram. The total variance of Y = 1; variance in Y accounted for by x1: r²x1y = 0.36; variance in Y accounted for by x2: r²x2y = 0.36]

217
217 In this model –R² = r²yx1 + r²yx2 –R² = 0.36 + 0.36 = 0.72 –R = √0.72 = 0.85 But –If x1 and x2 are correlated –No longer the case

218
218 [Figure: Venn diagram. The total variance of Y = 1; r²x1y = 0.36; r²x2y = 0.36; plus variance shared between x1 and x2 (not equal to rx1x2)]

219
219 So –We can no longer sum the r² –Need to sum them, and subtract the shared variance – i.e. the correlation But –It's not the correlation between them –It's the correlation between them as a proportion of the variance of Y Two different ways

220
220 If rx1x2 = 0 –rxy = b for each x –Equivalent to r²yx1 + r²yx2 Based on estimates

221
221 rx1x2 = 0 –Equivalent to r²yx1 + r²yx2 Based on correlations

222
222 Can also be calculated using methods we have seen –Based on PRE (predicted value) –Based on correlation with prediction Same procedure with >2 IVs
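For two correlated predictors, "sum the r² and subtract the shared variance" has a standard closed form, sketched here (the .6s echo the Venn-diagram example; the .5 is illustrative):

```python
def r_squared_two_ivs(r_y1, r_y2, r_12):
    """R^2 for two (possibly correlated) predictors, from correlations alone."""
    return (r_y1**2 + r_y2**2 - 2 * r_y1 * r_y2 * r_12) / (1 - r_12**2)

# Uncorrelated IVs: reduces to the sum of squared correlations
print(round(r_squared_two_ivs(0.6, 0.6, 0.0), 2))  # 0.72, as on the slide
# Correlated IVs: shared variance can only be counted once, so R^2 drops
print(round(r_squared_two_ivs(0.6, 0.6, 0.5), 2))
```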

223
223 Adjusted R² R² is on average an overestimate of the population value of R² –Any x will not correlate 0 with Y –Any variation away from 0 increases R –Variation from 0 more pronounced with lower N Need to correct R² –Adjusted R²

224
224 1 – R² –Proportion of unexplained variance –We multiply this by an adjustment More variables – greater adjustment More people – less adjustment Calculation of Adj. R²
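The calculation slide itself is missing; a sketch of the usual adjustment (Wherry's formula, which I am assuming is the version intended):

```python
def adjusted_r2(r2, n, k):
    """Shrink R^2 by multiplying the unexplained proportion by (n-1)/(n-k-1).
    n cases, k predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# The same R^2 is shrunk more with more predictors or fewer people
print(round(adjusted_r2(0.72, 30, 2), 3))   # modest shrinkage
print(round(adjusted_r2(0.72, 10, 5), 3))   # heavy shrinkage
```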

225
225

226
226 Extra Bits Some stranger things that can happen – Counter-intuitive

227
227 Suppressor variables Can be hard to understand –Very counter-intuitive Definition –A predictor which increases the size of the parameters associated with other predictors above the size of their correlations

228
228 An example (based on Horst, 1941) –Success of trainee pilots –Mechanical ability (x1), verbal ability (x2), success (y) Correlation matrix

229
229 –Mechanical ability correlates 0.3 with success –Verbal ability correlates 0.0 with success –What will the parameter estimates be? –(Don't look ahead until you have had a guess)

230
230 Mechanical ability – b = 0.4 –Larger than r! Verbal ability – b = −0.2 –Smaller than r!! So what is happening? –You need verbal ability to do the mechanical ability test –Not actually related to mechanical ability Measure of mechanical ability is contaminated by verbal ability
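The slide's correlation matrix did not survive conversion, but the quoted estimates are consistent with mechanical and verbal ability correlating 0.5, which is assumed in this quick Python check:

```python
def standardised_betas(r_y1, r_y2, r_12):
    # Closed form of B = Rxx^-1 * Rxy for two predictors
    det = 1 - r_12 ** 2
    return (r_y1 - r_12 * r_y2) / det, (r_y2 - r_12 * r_y1) / det

# Horst-style suppressor: r(mech, y) = .3, r(verbal, y) = .0,
# and (my assumption) r(mech, verbal) = .5
b_mech, b_verbal = standardised_betas(0.3, 0.0, 0.5)
print(round(b_mech, 2), round(b_verbal, 2))  # 0.4 -0.2: b_mech > r, b_verbal < 0
```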

231
231 High mech, low verbal –High mech This is positive (.4) –Low verbal Negative, because we are talking about standardised scores (−(−.2) = +.2) Your mech is really high – you did well on the mechanical test, without being good at the words High mech, high verbal –Well, you had a head start on mech, because of verbal, and need to be brought down a bit

232
232 Another suppressor? b1 = b2 =

233
233 Another suppressor? b1 = 0.26 b2 = −0.06

234
234 And another? b1 = b2 =

235
235 And another? b1 = 0.53 b2 = −0.47

236
236 One more? b1 = b2 =

237
237 One more? b1 = 0.53 b2 = 0.47

238
238 Suppression happens when two opposing forces are happening together –And have opposite effects Don't throw away your IVs –Just because they are uncorrelated with the outcome Be careful in interpretation of regression estimates –Really need the correlations too, to interpret what is going on –Cannot compare between studies with different predictors –Think about what you want to know Before throwing variables into the analysis

239
What to Control For? What is the added value of a better college –In terms of salary –More academic people go to better colleges –Control for: Ability? Social class? Mother's education? Parents' income? Course? Ethnic group? … 239

240
Decisions about control variables –Guided by theory Effect of gender –Controlling for hair length and skirt wearing? 240

241
241

242
Do dogs make kids healthier? –What to control for? Parents' weight? Yes: obese parents are more likely to have obese kids, and kids who are thin relative to their parents are thin. No: the dog might make the parent thinner. By controlling for parental weight, you're controlling for the effect of the dog 242

243

244
[Figure: path diagram, Dog → Kids' health, with good control vars and bad control vars indicated]

245
[Figure: path diagram, Dog → Kids' health, with candidate controls: Income, Parent Weight, Child Asthma, Rural/Urban?, House/apartment?]

246
246 Standardised Estimates > 1 Correlations are bounded (−1 ≤ r ≤ +1) –We think of standardised regression estimates as being similarly bounded But they are not –Can go > 1.00, < −1.00 –R cannot, because that is a proportion of variance

247
247 Three measures of ability –Mechanical ability, verbal ability 1, verbal ability 2 –Score on science exam –Before reading on, what are the parameter estimates?

248
248 Mechanical –About where we expect Verbal 1 –Very high Verbal 2 –Very low

249
249 What is going on –It's a suppressor again –a predictor which increases the size of the parameters associated with other predictors above the size of their correlations Verbal 1 and verbal 2 are correlated so highly –They need to cancel each other out

250
250 Variable Selection What are the appropriate predictors to use in a model? –Depends what you are trying to do Multiple regression has two separate uses –Prediction –Explanation

251
251 Prediction –What will happen in the future? –Emphasis on practical application –Variables selected (more) empirically –Value free Explanation –Why did something happen? –Emphasis on understanding phenomena –Variables selected theoretically –Not value free

252
252 Visiting the doctor –Precedes suicide attempts –Predicts suicide Does not explain suicide More on causality later on … Which are appropriate variables –To collect data on? –To include in analysis? –Decision needs to be based on theoretical knowledge of the behaviour of those variables –Statistical analysis of those variables (later) Unless you didn't collect the data –Common sense (not a useful thing to say)

253
253 Variable Entry Techniques Entry-wise –All variables entered simultaneously Hierarchical –Variables entered in a predetermined order Stepwise –Variables entered according to change in R² –Actually a family of techniques

254
254 Entrywise regression –All variables entered simultaneously –All treated equally Hierarchical regression –Entered in a theoretically determined order –Change in R² is assessed, and tested for significance –e.g. sex and age Should not be treated equally with other variables Sex and age MUST be first (unchangeable) –Not to be confused with hierarchical linear modelling (MLM)

255
R-Squared Change 255 SSE0, df0 – SSE and df for the first (smaller) model SSE1, df1 – SSE and df for the second (larger) model
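The F formula itself did not survive conversion; the standard form, in terms of the two models' SSE and residual df, can be sketched as follows (numbers made up):

```python
def f_change(sse0, df0, sse1, df1):
    """F test for the R^2 change when moving from a smaller model (SSE0, df0)
    to a larger one (SSE1, df1); df are residual degrees of freedom."""
    return ((sse0 - sse1) / (df0 - df1)) / (sse1 / df1)

# Illustrative: adding one predictor drops SSE from 120 to 90
print(round(f_change(120, 98, 90, 97), 2))  # compare to F(1, 97)
```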

256
256 Stepwise –Variables entered empirically –Variable which increases R² the most goes first Then the next … –Variables which have no effect can be removed from the equation Example –House prices – what's important? –Size, lot size, list price, …

257
257 Stepwise Analysis –Data determines the order –Model 1: listing price, R² = 0.87 –Model 2: listing price + lot size, R² = 0.89

258
258 Hierarchical analysis –Theory determines the order –Model 1: lot size + house size, R² = –Model 2: + list price, R² = –Change in R² = 0.41, p < 0.001

259
259 Which is the best model? –Entrywise – OK –Stepwise – excluded age Excluded size –MOST IMPORTANT PREDICTOR –Hierarchical Listing price accounted for additional variance –Whoever decides the price has information that we don't Other problems with stepwise –F and df are wrong (cheats with df) –Unstable results Small changes (sampling variance) – large differences in models

260
260 –Uses a lot of paper –Don't use a stepwise procedure to pack your suitcase

261
261 Is Stepwise Always Evil? Yes All right, no Research goal is entirely predictive (technological) –Not explanatory (scientific) –What happens, not why N is large –40 people per predictor, Cohen, Cohen, Aiken, West (2003) Cross-validation takes place

262
Alternatives to stepwise regression –More recently developed –Used for genetic studies 1000s of predictors, one outcome, small samples –Least Angle Regression LARS (least angle regression) Lasso (least absolute shrinkage and selection operator) 262

263
Entry Methods in Stata Entrywise –What regress does Hierarchical –Two ways –Use hireg –Add-on module net search hireg Then install 263

264
Hierarchical Regression Use (on one line) –hireg outcome (block1var1 block1var2) (block2var1 block2var2) Hireg reports –Parameter estimates for the two regressions –R² for each model, change in R² 264

265
265 [Table: Model, R², F(df), p for Model 1 (1,98) and Model 2 (2,97); then R² change, F(df) change (1,97), p. P-value for the R², and p-value for the change in R²]

266
Hierarchical Regression (Cont…) I don't like hireg, for two reasons –It's different to regress –It only works for OLS regression, not logistic, multinomial, Poisson, etc. Alternative 2: –Use test –The p-value associated with the change in R² for a variable Equal to the p-value for that variable. 266

267
Hierarchical Regression (Cont…) Example (using cars) –Parameters from final model: –hireg price () (extro) car | Coef. Std. Err. t P>|t| [95% Conf. Interval] extro | –R² change statistics R² change F(df) change p (1,36) –(What is the relationship between t and F?) We know the p-value of the R² change –When there is one predictor in the block –What about when there's more than one? 267

268
Hierarchical Regression (Cont) test isn't exactly what we want –But it is the same as what we want Advantage of test –You can always use it (I can always remember how it works) 268

269
(For SPSS) SPSS calls them blocks Enter some variables, click next block –Enter more variables Click on Statistics –Click on R-squared change 269

270
Stepwise Regression Add the stepwise: prefix With –pr() – probability value to be removed from the equation –pe() – probability value to be entered into the equation stepwise, pe(0.05) pr(0.2): reg price sqm lotsize originallis 270

271
271 A quick note on R² R² is sometimes regarded as the fit of a regression model –Bad idea If good fit is required – maximise R² –Leads to entering variables which do not make theoretical sense

272
Propensity Scores Another method of controlling for variables Ensure that predictors are uncorrelated with one predictor –Don't need to control for them 272

273
xs Uncorrelated? Two cases when xs are uncorrelated Experimental design –Predictors are uncorrelated –We randomly assigned people to conditions to ensure that was the case Sample weights –We can deliberately sample Ensure that they are uncorrelated 273

274
20 women with college degree 20 women without college degree 20 men with college degree 20 men without college degree –Or use post hoc sample weights Propensity weighting –Weight to ensure that variables are uncorrelated –Usually done to avoid having to control –E.g. ethnic differences in PTSD symptoms –Can incorporate many more control variables

275
Propensity Scores Race profiling of police stops –Same time, place, area, etc. –www.youtube.com/watch?v=Oot0BOaQTZI 275

276
276 Critique of Multiple Regression Goertzel (2002) –Myths of murder and multiple regression –Skeptical Inquirer (Paper B1) Econometrics and regression are junk science –Multiple regression models (in US) –Used to guide social policy

277
277 More Guns, Less Crime –(controlling for other factors) Lott and Mustard: A 1% increase in gun ownership –3.3% decrease in murder rates But: –More guns in rural Southern US –More crime in urban North (crack cocaine epidemic at time of data)

278
278 Executions Cut Crime No difference between crime rates in US states with or without the death penalty Ehrlich (1975) controlled all variables that affect crime rates –Death penalty had an effect in reducing the crime rate No statistical way to decide who's right

279
279 Legalised Abortion Donohue and Levitt (1999) –Legalised abortion in the 1970s cut crime in the 1990s Lott and Whitley (2001) –Legalising abortion increased murder rates by … 0.5 to 7 per cent. It's impossible to model these data –Controlling for other historical events –Crack cocaine (again)

280
Crime is still dropping in the US –Despite the recession Levitt says it's mysterious, because the abortion effect should be over Some suggest Xboxes, Playstations, etc. Netflix, DVRs –(Violent movies reduce crime). 280

281
281 Another Critique Berk (2003) –Regression analysis: a constructive critique (Sage) Three cheers for regression –As a descriptive technique Two cheers for regression –As an inferential technique One cheer for regression –As a causal analysis

282
282 Is Regression Useless? Do regression carefully –Don't go beyond data which you have a strong theoretical understanding of Validate models –Where possible, validate predictive power of models in other areas, times, groups Particularly important with stepwise

283
283 Lesson 6: Categorical Predictors

284
284 Introduction

285
285 Introduction So far, just looked at continuous predictors Also possible to use categorical (nominal, qualitative) predictors –e.g. Sex; Job; Religion; Region; Type (of anything) Usually analysed with t-test/ANOVA

286
286 Historical Note But these (t-test/ANOVA) are special cases of regression analysis –Aspects of General Linear Models (GLMs) So why treat them differently? –Fisher's fault –The computer's fault Regression, as we have seen, is computationally difficult –Matrix inversion and multiplication –Can't do it without a computer

287
287 In the special cases where: You have one categorical predictor Your IVs are uncorrelated –It is much easier to do it by partitioning of sums of squares These cases –Very rare in applied research –Very common in experimental research Fisher worked at Rothamsted agricultural research station Never have problems manipulating wheat, pigs, cabbages, etc.

288
288 In psychology –Led to a split between experimental psychologists and correlational psychologists –Experimental psychologists (until recently) would not think in terms of continuous variables Still (too) common to dichotomise a variable –Too difficult to analyse it properly –Equivalent to discarding 1/3 of your data

289
289 The Approach

290
290 The Approach Recode the nominal variable –Into one, or more, variables to represent that variable Names are slightly confusing –Some texts talk of dummy coding to refer to all of these techniques –Some (most) use dummy coding to refer to one of them –Most have more than one name

291
291 If a variable has g possible categories it is represented by g − 1 variables Simplest case: –Smokes: Yes or No –Variable 1 represents Yes –Variable 2 is redundant If it isn't yes, it's no

292
292 The Techniques

293
293 We will examine two coding schemes –Dummy coding For two groups For >2 groups –Effect coding For >2 groups Look at analysis of change –Equivalent to ANCOVA –Pretest-posttest designs

294
294 Dummy Coding – 2 Groups Sometimes called simple coding A categorical variable with two groups One group chosen as a reference group –The other group is represented in a variable e.g. 2 groups: Experimental (Group 1) and Control (Group 0) –Control is the reference group –Dummy variable represents the experimental group Call this variable group1

295
295 For variable group1 –1 = Yes, 0 = No

296
296 Some data Group is x, score is y

297
297 Control Group = 0 –Intercept = score on Y when x = 0 –Intercept = mean of the control group Experimental Group = 1 –b = change in Y when x increases 1 unit –b = difference between the experimental group and the control group
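A quick numeric check, with made-up scores, that regressing on a 0/1 dummy reproduces the group means:

```python
# b0 should equal the control mean; b1 the difference in means.
control = [4.0, 5.0, 6.0]        # coded x = 0
experimental = [7.0, 9.0, 8.0]   # coded x = 1

x = [0] * len(control) + [1] * len(experimental)
y = control + experimental

# OLS slope and intercept for one predictor
mx = sum(x) / len(x)
my = sum(y) / len(y)
b1 = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
b0 = my - b1 * mx

print(b0, b1)  # 5.0 and 3.0: the control mean, and the difference in means
```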

298
298 Gradient of slope represents difference between means

299
299 Dummy Coding – 3+ Groups With three groups the approach is similar g = 3, therefore g − 1 = 2 variables needed 3 Groups –Control –Experimental Group 1 –Experimental Group 2

300
300 Recoded into two variables –Note – do not need a 3rd variable If we are not in group 1 or group 2 we MUST be in the control group A 3rd variable would add no information (What would happen to the determinant?)

301
301 F and associated p –Tests H0 that b1 = b2 = 0 b1 and b2 and their associated p-values –Test the difference between each experimental group and the reference group To test the difference between the experimental groups –Need to rerun the analysis (or just do ANOVA with post-hoc tests)

302
302 One more complication –Have now run multiple comparisons –Increases α – i.e. the probability of a type I error Need to correct for this –Bonferroni correction –Multiply the given p-values by two/three (depending how many comparisons were made)

303
303 Effect Coding Usually used for 3+ groups Compares each group (except the reference group) to the mean of all groups –Dummy coding compares each group to the reference group. Example with 5 groups –1 group selected as the reference group Group 5

304
304 Each group (except the reference) has a variable –1 if the individual is in that group –0 if not –−1 if in the reference group
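A sketch of the coding with made-up data. With equal group sizes, OLS on these codes returns the grand mean of the group means as b0, and each group's deviation from that grand mean as its b; the code below computes those target values directly rather than running the regression:

```python
# Effect coding for 3 groups, with group 3 as the reference.
groups = {1: [4.0, 5.0, 6.0], 2: [7.0, 9.0, 8.0], 3: [5.0, 6.0, 7.0]}

def effect_codes(group, g=3):
    """Codes on variables 1..g-1: 1 in own group, -1 throughout the reference."""
    if group == g:
        return [-1] * (g - 1)
    return [1 if j == group else 0 for j in range(1, g)]

means = {k: sum(v) / len(v) for k, v in groups.items()}
grand = sum(means.values()) / len(means)
print(effect_codes(1), effect_codes(3))           # [1, 0] and [-1, -1]
print(grand, means[1] - grand, means[2] - grand)  # b0, b1, b2 (balanced case)
```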

305
305 Examples Dummy coding and Effect Coding Group 1 chosen as reference group each time Data

306
306 [Table: the same groups coded two ways – dummy variables (dummy2, dummy3) and effect variables (effect2, effect3)]

307
307 Dummy R = 0.543, F = 5.7, df = 2, 27, p = 0.009 b0 = 52.4, b1 = 3.9, p = 0.100 b2 = 7.7, p = 0.002 Effect R = 0.543, F = 5.7, df = 2, 27, p = 0.009 b0 = 56.27, b1 = 0.03, p = 0.980 b2 = 3.8, p = 0.007

308
308 In Stata Use the xi: prefix for dummy coding Use the xi3: module for more codings But –I don't like it, I do it by hand –I don't understand what it's doing –It makes very long variables And then I can't use test –BUT: If doing stepwise, you need to keep the variables together Example: xi: reg outcome contpred i.catpred Put i. in front of categorical predictors This has changed in Stata 11: xi: no longer needed

309
xi: reg salary i.job_description salary | Coef. Std. Err. t P>|t| _Ijob_desc~2 | _Ijob_desc~3 | _cons |

310
Exercise Golf balls –Which is best? 310

311
311 In SPSS SPSS provides two equivalent procedures for regression –Regression –GLM GLM will: –Automatically code categorical variables –Automatically calculate interaction terms –Allow you to not understand GLM won't: –Give standardised effects –Give hierarchical R² p-values

312
312 ANCOVA and Regression

313
313 Test –(Which is a trick; but it's designed to make you think about it) Use bank data (Ex 5.3) –Compare the pay rise (difference between salbegin and salary) –For ethnic minority and non-minority staff What do you find?

314
314 ANCOVA and Regression The dummy coding approach has one special use –In ANCOVA, for the analysis of change Pre-test post-test experimental design –Control group and (one or more) experimental groups –Tempting to use difference score + t-test / mixed design ANOVA –Inappropriate

315
315 Salivary cortisol levels –Used as a measure of stress –Not the absolute level, but change in level over the day may be interesting Test at: 9.00am, 9.00pm Two groups –High stress group (cancer biopsy) Group 1 –Low stress group (no biopsy) Group 0

316
316 Correlation of AM and PM = (p = 0.008) Has there been a significant difference in the rate of change of salivary cortisol? –3 different approaches

317
317 Approach 1 – find the differences, do a t-test – t = 1.31, df = 26, p = 0.203 Approach 2 – mixed ANOVA, look for the interaction effect –F = 1.71, df = 1, 26, p = –F = t² Approach 3 – regression (ANCOVA) based approach

318
318 –IVs: AM and group –outcome: PM –b1 (group) = 3.59, standardised b1 = 0.432, p = 0.01 Why is the regression approach better? –The other two approaches took the difference –Assumes that r = 1.00 –Any difference from r = 1.00 and you add error variance Subtracting error is the same as adding error

319
319 Using regression –Ensures that all the variance that is subtracted is true –Reduces the error variance Two effects –Adjusts the means Compensates for differences between groups –Removes error variance Data is am-pm cortisol

320
320 More on Change If the difference score is correlated with either pre-test or post-test –Subtraction fails to remove the difference between the scores –If the two scores are uncorrelated The difference will be correlated with both Failure to control –Equal SDs, r = 0 Correlation of change and pre-score = 0.707

321
321 Even More on Change A topic of surprising complexity –What I said about difference scores isn't always true Lord's paradox – it depends on the precise question you want to answer –Collins and Horn (1993). Best methods for the analysis of change –Collins and Sayer (2001). New methods for the analysis of change –More later

322
322 Lesson 7: Assumptions in Regression Analysis

323
323 The Assumptions 1. The distribution of residuals is normal (at each value of the outcome). 2. The variance of the residuals for every set of values for the predictor is equal; violation is called heteroscedasticity. 3. The error term is additive – no interactions. 4. At every value of the outcome the expected (mean) value of the residuals is zero – no non-linear relationships

324
324 5. The expected correlation between residuals, for any two cases, is 0 – the independence assumption (lack of autocorrelation) 6. All predictors are uncorrelated with the error term. 7. No predictors are a perfect linear function of other predictors (no perfect multicollinearity) 8. The mean of the error term is zero.

325
325 What are we going to do … Deal with some of these assumptions in some detail Deal with others in passing only –look at them again later on

326
326 Assumption 1: The Distribution of Residuals is Normal at Every Value of the Outcome

327
327 Look at Normal Distributions A normal distribution –symmetrical, bell-shaped (so they say)

328
328 What can go wrong? Skew –asymmetry –one tail longer than the other Kurtosis –too flat or too peaked –kurtosed Outliers –Individual cases which are far from the distribution

329
329 Effects on the Mean Skew –biases the mean, in the direction of skew Kurtosis –mean not biased –standard deviation is –and hence standard errors, and significance tests

330
330 Examining Univariate Distributions Graphs –Histograms –Boxplots –P-P plots Calculation-based methods

331
331 Histograms A and B

332
332 C and D

333
333 E & F

334
334 Histograms can be tricky â€¦.

335
335 Boxplots

336
336 P-P Plots A & B

337
337 C & D

338
338 E & F

339
339 Skew and Kurtosis statistics Outlier detection statistics Calculation Based

340
340 Skew and Kurtosis Statistics Normal distribution –skew = 0 –kurtosis = 0 Two methods for calculation –Fisher's and Pearson's –Very similar answers Associated standard error –can be used for a significance test (t-test) of departure from normality –not actually very useful Never normal above N = 400
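As a sketch, the moment-based (population) versions of the two statistics, both zero for a normal distribution; note that the small-sample corrections packages apply (Fisher's vs Pearson's variants) differ slightly from this bare form:

```python
def moments(xs):
    """Second, third and fourth central moments of a sample."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m2, m3, m4

def skew_kurtosis(xs):
    m2, m3, m4 = moments(xs)
    # skew = m3 / m2^1.5; excess kurtosis = m4 / m2^2 - 3
    return m3 / m2 ** 1.5, m4 / m2 ** 2 - 3

# A symmetric sample has zero skew; a flat one has negative kurtosis
print(skew_kurtosis([1, 2, 3, 4, 5]))
```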

341
341

342
342 Outlier Detection Calculate distance from the mean –z-score (number of standard deviations) –deleted z-score – that case biased the mean, so remove it –Look up expected distance from mean: 1%, 3+ SDs

343
343 Non-Normality in Regression

344
344 Effects on OLS Estimates The mean is an OLS estimate The regression line is an OLS estimate Lack of normality –biases the position of the regression slope –makes the standard errors wrong probability values attached to statistical significance wrong

345
345 Checks on Normality Check residuals are normally distributed –Draw a histogram of the residuals Use regression diagnostics –Lots of them –Most aren't very interesting

346
346 Regression Diagnostics Residuals –Standardised, studentised-deleted –look for cases > |3| (?) Influence statistics –Look for the effect a case has –If we remove that case, do we get a different answer? –DFBeta, Standardised DFBeta changes in b

347
347 –DfFit, Standardised DfFit change in predicted value Distances –measures of distance from the centroid –some include the IV, some don't

348
348 More on Residuals Residuals are trickier than you might have imagined Raw residuals –OK Standardised residuals –Residuals divided by SD

349
349 Standardised / Studentised Now we can calculate the standardised residuals –SPSS calls them studentised residuals –Also called internally studentised residuals

350
350 Deleted Studentised Residuals Studentised residuals do not have a known distribution –Cannot use them for inference Deleted studentised residuals –Externally studentised residuals –Studentized (jackknifed) residuals Distributed as t With df = N − k − 1

351
351 Testing Significance We can calculate the probability of a residual –Is it sampled from the same population BUT –Massive type I error rate –Bonferroni correct it Multiply the p-value by N

352
352 Bivariate Normality We didn't just say residuals normally distributed We said at every value of the outcome Two variables can be normally distributed –univariate –but not bivariate

353
353 Couples' IQs –male and female –Seem reasonably normal

354
354 But wait!!

355
355 When we look at bivariate normality –not normal – there is an outlier So plot X against Y OK for bivariate –but – may be a multivariate outlier –Need to draw a graph in 3+ dimensions –can't draw a graph in 3 dimensions But we can look at the residuals instead …

356
356 IQ histogram of residuals

357
357 Multivariate Outliers … Will be explored later in the exercises So we move on …

358
358 What to do about Non-Normality Skew and Kurtosis –Skew – much easier to deal with –Kurtosis – less serious anyway Transform data –removes skew –positive skew – log transform –negative skew – square

359
359 Transformation May need to transform the IV and/or outcome –More often the outcome time, income, symptoms (e.g. depression) all positively skewed –can cause non-linear effects (more later) if only one is transformed –alters interpretation of the unstandardised parameter –May alter the meaning of the variable Some people say that this is such a big problem –Never transform –May add / remove non-linear and moderator effects

360
360 Change measures –increase sensitivity at ranges avoiding floor and ceiling effects Outliers –Can be tricky –Why did the outlier occur? Error? Delete them. Weird person? Probably delete them Normal person? Tricky.

361
361 –You are trying to model a process: is the data point outside the process? e.g. lottery winners, when looking at salary; a yawn, when looking at reaction time –Which is better? A good model, which explains 99% of your data? (because we threw outliers out) A poor model, which explains all of it (because we keep outliers in) I prefer a good model

362
More on House Prices Zillow.com tracks and predicts house prices –In the USA Sometimes detects outliers –We don't trust this selling price –We haven't used it 362

363
Example in Stata reg salary educ predict res, res hist res gen logsalary = log(salary) reg logsalary educ predict logres, res hist logres 363

364

365

366
But … Parameter estimates change Interpretation of the parameter estimate is different Exercise 7.0,

367
Bootstrapping Bootstrapping is very, very cool And very, very clever But very, very simple 367

368
Bootstrapping When we estimate a test statistic (F or r or t or χ²) We rely on knowing the sampling distribution Which we know –If the distributional assumptions are satisfied 368

369
Estimate the Distribution Bootstrapping lets you: –Skip the bit about the distribution –Estimate the sampling distribution from the data This shouldn't be allowed –Hence "bootstrapping" –But it is 369

370
How to Bootstrap We resample, with replacement Take our sample –Sample 1 individual Put that individual back, so that they can be sampled again –Sample another individual Keep going until we've sampled as many people as were in the sample Analyze the data Repeat the process B times –Where B is a big number 370
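The resampling loop above can be sketched in a few lines of Python, here bootstrapping a mean with a made-up sample (seed fixed so the run is reproducible):

```python
import random

random.seed(1)                     # deterministic for the example
sample = [4, 8, 6, 5, 3, 7, 9, 5, 6, 4]
B = 1000                           # number of bootstrap resamples

boot_means = []
for _ in range(B):
    # Resample n cases WITH replacement, then compute the statistic
    resample = [random.choice(sample) for _ in sample]
    boot_means.append(sum(resample) / len(resample))

boot_means.sort()
m = sum(boot_means) / B
# Semi-parametric: SD of the bootstrap statistics is the SE
se = (sum((b - m) ** 2 for b in boot_means) / (B - 1)) ** 0.5
# Non-parametric (percentile): 2.5th and 97.5th percentiles
lo, hi = boot_means[24], boot_means[974]
print(round(se, 2), lo, hi)
```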

371
Example 371 Original B B B

372
Analyze each dataset â€“Sampling distribution of statistic Gives sampling distribution 2 approaches to CI or P Semi-parametric â€“Calculate standard error of statistic â€“Call that the standard deviation â€“Does not make assumption about distribution of data Makes assumption about sampling distribution 372

373
Non-parametric â€“Stata calls this percentile Count. â€“If you have 1000 samples â€“25 th is lower CI â€“975 th is upper CI â€“P-value is proportion that cross zero Non-parametric needs more samples 373

374
Bootstrapping in Stata Very easy: â€“Use bootstrap: (or bs: or bstrap: ) prefix or â€“(Better) use vce(bootstrap) option By default does 50 samples â€“Not enough â€“Use reps() â€“At least

375
Example reg salary salbegin educ, vce(bootstrap, reps(50)) | Observed Bootstrap | Coef. Std. Err. z salbegin | Again salbegin |

376
More Reps 1,000 reps â€“Z = Again â€“Z = ,000 reps â€“17.23 â€“

377
377 Exercise 7.2, 7.3

378
378 Assumption 2: The variance of the residuals for every set of values for the predictor is equal.

379
379 Heteroscedasticity This assumption is about heteroscedasticity of the residuals –Hetero = different –Scedastic = scattered We don't want heteroscedasticity –we want our data to be homoscedastic Draw a scatterplot to investigate

380
380

381
381 Only works with one IV –need every combination of IVs Easy to get – use predicted values –use residuals there Plot predicted values against residuals A bit like turning the scatterplot to make the line of best fit flat

382
382 Good – no heteroscedasticity

383
383 Bad – heteroscedasticity

384
384 Testing Heteroscedasticity White's test 1. Do the regression, save the residuals. 2. Square the residuals 3. Square the IVs 4. Calculate the interactions of the IVs –e.g. x1x2, x1x3, x2x3

385
385 5. Run a regression using –squared residuals as the outcome –IVs, squared IVs, and interactions as IVs 6. Test statistic = N × R² –Distributed as χ² –df = k (for the second regression) Use education and salbegin to predict salary (employee data.sav) –R² = 0.113, N = 474, χ² = 53.5, df = 5, p < Automatic in Stata –estat imtest, white
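With one predictor the auxiliary regression has only x and x² on the right-hand side, so N × R² can be computed from three correlations (via the two-predictor R² formula). A sketch, taking the residuals as given rather than fitting the first regression:

```python
def corr(a, b):
    """Pearson correlation of two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((x - mb) ** 2 for x in b) ** 0.5
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (sa * sb)

def white_statistic(residuals, x):
    """N * R^2 from regressing squared residuals on x and x^2
    (one-predictor case; compare to chi-square with df = 2)."""
    e2 = [e * e for e in residuals]
    x2 = [v * v for v in x]
    r1, r2, r12 = corr(e2, x), corr(e2, x2), corr(x, x2)
    r_sq = (r1**2 + r2**2 - 2 * r1 * r2 * r12) / (1 - r12**2)
    return len(x) * r_sq

# Residual spread grows with x (signs alternate): strongly heteroscedastic
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
e = [xi * (-1) ** i for i, xi in enumerate(x)]
print(round(white_statistic(e, x), 1))  # equals N here, since e^2 = x^2 exactly
```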

386
386 Plot of Predicted and Residual

387
White's Test as a Test of Interest Possible to have a theory that predicts heteroscedasticity Lupien et al., 2006 –Heteroscedasticity in the relationship of hippocampal volume and age 387

388
388 Magnitude of Heteroscedasticity Chop the data into 5 slices –Calculate the variance of each slice –Check the ratio of smallest to largest –Less than 5 is OK
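A sketch of the five-slice check in Python, with made-up predicted values and residuals whose spread grows:

```python
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def slice_variance_ratio(pred, resid, slices=5):
    """Order cases by predicted value, cut into equal slices,
    and return largest / smallest residual variance."""
    pairs = sorted(zip(pred, resid))
    per = len(pairs) // slices
    variances = [variance([r for _, r in pairs[i * per:(i + 1) * per]])
                 for i in range(slices)]
    return max(variances) / min(variances)

pred = list(range(20))
resid = [(-1) ** i * (1 + p / 10) for i, p in enumerate(pred)]  # spread grows
ratio = slice_variance_ratio(pred, resid)
print(round(ratio, 1))  # over 5, so by the rule of thumb this is a problem
```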

389
gen slice = 1 replace slice = 2 if pred > replace slice = 3 if pred > replace slice = 4 if pred > replace slice = 5 if pred > bysort slice: su pred 1: : (Doesnt look too bad, thanks to skew in predictors) 389

390
390 Dealing with Heteroscedasticity Use Huber-White (robust) estimates – Also called sandwich estimates – Also called empirical estimates Use survey techniques – Relatively straightforward in SAS and Stata, fiddly in SPSS – Google: SPSS Huber-White

391
SE can be calculated with the sandwich estimator: Var(b) = (X′X)⁻¹ X′diag(e²)X (X′X)⁻¹ Why's it a sandwich? The two (X′X)⁻¹ terms are the bread; X′diag(e²)X is the filling 391
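For a single predictor the sandwich collapses to a simple closed form; a minimal Python sketch (illustrative only, not Stata's internals — the function name and data below are made up) comparing the classical and HC0 robust standard error:

```python
# Classical vs Huber-White (HC0 sandwich) SE for simple regression:
# bread = 1 / sum((x - xbar)^2), meat = sum((x - xbar)^2 * e^2).
from math import sqrt

def robust_se(x, y):
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    b0 = ybar - b1 * xbar
    resid = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
    # classical (model-based) SE assumes one residual variance everywhere
    s2 = sum(e * e for e in resid) / (n - 2)
    se_classical = sqrt(s2 / sxx)
    # HC0 sandwich: bread * meat * bread
    meat = sum(((xi - xbar) ** 2) * e * e for xi, e in zip(x, resid))
    se_robust = sqrt(meat / sxx ** 2)
    return se_classical, se_robust
```

With heteroscedastic data (residual spread growing with x) the robust SE is typically larger, matching the slide's "SEs usually go up, can go down".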

392
Example reg salary educ – Standard errors: 204, 2821 reg salary educ, robust – Standard errors: 267, 3347 SEs usually go up, can go down 392

393
393 Heteroscedasticity – Implications and Meanings Implications What happens as a result of heteroscedasticity? – Parameter estimates are correct (not biased) – Standard errors (hence p-values) are incorrect

394
394 However … If there is no skew in predicted scores – P-values a tiny bit wrong If skewed – P-values can be very wrong Exercise 7.4

395
Robust SE Haiku
T-stat looks too good.
Use robust standard errors
significance gone
395

396
396 Meaning What is heteroscedasticity trying to tell us? – Our model is wrong – it is misspecified – Something important is happening that we have not accounted for e.g. amount of money given to charity (given) – depends on: earnings degree of importance person assigns to the charity (import)

397
397 Do the regression analysis – R² = 0.60, p < 0.001 – seems quite good – b0 = 0.24, p = 0.97 – b1 = 0.71, p < 0.001 – b2 = 0.23 White's test – χ² = 18.6, df = 5, p = 0.002 The plot of predicted values against residuals …

398
398 Plot shows heteroscedastic relationship

399
399 Which means … – the effects of the variables are not additive – If you think that what a charity does is important, you might give more money; how much more depends on how much money you have

400
400

401
401 One more thing about heteroscedasticity – it is the equivalent of homogeneity of variance in ANOVA/t-tests

402
Exercise 7.4, 7.5,

403
403 Assumption 3: The Error Term is Additive

404
404 Additivity What heteroscedasticity shows you – effects of variables need to be additive (assume no interaction between the variables) Heteroscedasticity doesn't always show it to you – can test for it, but hard work – (same as homogeneity of covariance assumption in ANCOVA) Have to know it from your theory A specification error

405
405 Additivity and Theory Two IVs – Alcohol has sedative effect A bit makes you a bit tired A lot makes you very tired – Some painkillers have sedative effect A bit makes you a bit tired A lot makes you very tired – A bit of alcohol and a bit of painkiller doesn't make you very tired – Effects multiply together, don't add together

406
406 If you don't test for it – It's very hard to know that it will happen So many possible non-additive effects – Cannot test for all of them – Can test for the obvious ones In medicine – Choose to test for salient non-additive effects – e.g. sex, race More on this, when we look at moderators

407
Exercise 7.6

408
408 Assumption 4: At every value of the outcome the expected (mean) value of the residuals is zero

409
409 Linearity Relationships between variables should be linear – best represented by a straight line Not a very common problem in social sciences – measures are not sufficiently accurate (much measurement error) to make a difference R² too low unlike, say, physics

410
410 Relationship between speed of travel and fuel used

411
411 R² looks pretty good – know speed, make a good prediction of fuel BUT – look at the chart – if we know speed we can make a perfect prediction of fuel used – R² should be 1.00

412
412 Detecting Non-Linearity Residual plot – just like heteroscedasticity Using this example – very, very obvious – usually pretty obvious

413
413 Residual plot

414
414 Linearity: A Case of Additivity Linearity = additivity along the range of the IV Jeremy rides his bicycle harder – Increase in speed depends on current speed – Not additive, multiplicative – MacCallum and Mar (1995). Distinguishing between moderator and quadratic effects in multiple regression. Psychological Bulletin.

415
415 Assumption 5: The expected correlation between residuals, for any two cases, is 0. The independence assumption (lack of autocorrelation)

416
416 Independence Assumption Also: lack of autocorrelation Tricky one – often ignored – exists for almost all tests All cases should be independent of one another – knowing the value of one case should not tell you anything about the value of other cases

417
417 How is it Detected? Can be difficult – need some clever statistics (multilevel models) Better off avoiding situations where it arises – Or handling it when it does arise Residual Plots

418
418 Residual Plots Were data collected in time order? – If so, plot ID number against the residuals – Look for any pattern Test for linear relationship Non-linear relationship Heteroscedasticity

419
419

420
420 How does it arise? Two main ways Time-series analyses – When cases are time periods weather on Tuesday and weather on Wednesday correlated inflation 1972, inflation 1973 are correlated Clusters of cases – patients treated by three doctors – children from different classes – people assessed in groups

421
421 Why does it matter? Standard errors can be wrong – therefore significance tests can be wrong Parameter estimates can be wrong – really, really wrong – from positive to negative An example – students do an exam (on statistics) – choose one of three questions IV: time outcome: grade

422
422 Result, with line of best fit

423
423 Result shows that – people who spent longer in the exam achieve better grades BUT … – we haven't considered which question people answered – we might have violated the independence assumption outcome will be autocorrelated Look again – with questions marked

424
424 Now somewhat different

425
425 Now, people that spent longer got lower grades – questions differed in difficulty – do a hard one, get a better grade – if you can do it, you can do it quickly

426
Dealing with Non-Independence For time series data – Time series analysis (another course) – Multilevel models (hard; another course) For clustered data – Robust standard errors – Generalized estimating equations – Multilevel models 426

427
Cluster Robust Standard Errors Predictor: School size Outcome: Grades Sample: – 20 schools – 20 children per school What is the N? 427

428
Robust Standard Errors Sample is: – 400 children – Is it 400? – Not really Each child adds information First child in a school adds lots of information about that school – 100th child in a school adds less information – How much less depends on how similar the children in the school are – 20 schools It's more than 20, but less than 400
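One standard way to put a number on "how much less" is the design effect from survey sampling, deff = 1 + (m − 1) × ICC, where m is the cluster size and ICC is the intra-class correlation; effective N is then N / deff. The slide doesn't give this formula, so treat the sketch below (made-up function name) as an illustrative aside:

```python
# Effective sample size for clustered data via the design effect.
def effective_n(n_clusters, cluster_size, icc):
    n = n_clusters * cluster_size
    deff = 1 + (cluster_size - 1) * icc   # design effect
    return n / deff
```

With 20 schools of 20 children and an ICC of 0.2, deff = 4.8, so the effective N is about 83 — more than the 20 schools, far fewer than the 400 children.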

429
Robust SE in Stata Very easy reg outcome predictor, robust cluster(clusterid) BUT – Only to be used where clustering is a nuisance only Only adjusts standard errors, not parameter estimates Only to be used where parameter estimates shouldn't be affected by clustering 429

430
Example of Robust SE Effects of incentives for attendance at adult literacy class – Some students rewarded for attendance – Others not rewarded 152 classes randomly assigned to each condition – Scores measured at mid term and final 430

431
Example of Robust SE Naïve – reg postscore tx midscore – Est: … SE: … Clustered – reg postscore tx midscore, robust cluster(classid) – Est: … SE: … 431

432
Problem with Robust Estimates Only corrects standard error – Does not correct estimate Other predictors must be uncorrelated with predictors of group membership – Or estimates are wrong Two alternatives: – Generalized estimating equations (gee) – Multilevel models 432

433
Independence + Heteroscedasticity Assumption is that residuals are: – Independently and identically distributed i.i.d. Same procedure used for both problems – Really, the same problem 433

434
Exercise 7.9

435
435 Assumption 6: All predictor variables are uncorrelated with the error term.

436
436 Uncorrelated with the Error Term A curious assumption – by definition, the residuals are uncorrelated with the predictors (try it and see, if you like) There are no other predictors that are important – That correlate with the error – i.e. have an effect

437
437 Problem in economics – Demand increases supply – Supply increases wages – Higher wages increase demand OLS estimates will be (badly) biased in this case – need a different estimation procedure – two-stage least squares simultaneous equation modelling – Instrumental variables

438
Another Haiku Supply and demand: without a good instrument, not identified. 438

439
439 Assumption 7: No predictors are a perfect linear function of other predictors no perfect multicollinearity

440
440 No Perfect Multicollinearity IVs must not be linear functions of one another – matrix of correlations of IVs is not positive definite – cannot be inverted – analysis cannot proceed Have seen this with – age, age start, time working (can't have all three in the model) – also occurs with subscale and total in model at the same time

441
441 Large amounts of collinearity – a problem (as we shall see) sometimes – not an assumption Exercise 7.11

442
442 Assumption 8: The mean of the error term is zero. You will like this one.

443
443 Mean of the Error Term = 0 Mean of the residuals = 0 That is what the constant is for – if the mean of the error term deviates from zero, the constant soaks it up – note, Greek letters because we are talking about population values

444
444 Can do regression without the constant – Usually a bad idea – E.g. R² = 0.995, p < 0.001 Looks good

445
445

446
446 Lesson 8: Issues in Regression Analysis Things that alter the interpretation of the regression equation

447
447 The Four Issues Causality Sample sizes Collinearity Measurement error

448
448 Causality

449
449 What is a Cause? Debate about definition of cause – some statistics (and philosophy) books try to avoid it completely – We are not going into depth, just going to show why it is hard Two dimensions of cause – Ultimate versus proximal cause – Determinate versus probabilistic

450
450 Proximal versus Ultimate Why am I here? – I walked here because – This is the location of the class because – Eric Tanenbaum asked me because – (I don't know) – because I was in my office when he rang because – I was a lecturer at Derby University because – I saw an advert in the paper because

451
451 – I exist because – My parents met because – My father had a job … Proximal cause – the direct and immediate cause of something Ultimate cause – the thing that started the process off – I fell off my bicycle because of the bump – I fell off because I was going too fast

452
452 Determinate versus Probabilistic Cause Why did I fall off my bicycle? – I was going too fast – But every time I ride too fast, I don't fall off – Probabilistic cause Why did my tyre go flat? – A nail was stuck in my tyre – Every time a nail sticks in my tyre, the tyre goes flat – Deterministic cause

453
453 Can get into trouble by mixing them together – Eating deep fried Mars Bars and doing no exercise are causes of heart disease – My Grandad ate three deep fried Mars Bars every day, and the most exercise he ever got was when he walked to the shop next door to buy one – (Deliberately?) confusing deterministic and probabilistic causes

454
454 Criteria for Causation Association (correlation) Direction of Influence (a → b) Isolation (not c → a and c → b)

455
455 Association Correlation does not mean causation – we all know But – Causation does mean correlation Need to show that two things are related – may be correlation – may be regression when controlling for third (or more) factor

456
456 Relationship between price and sales – suppliers may be cunning – when people want it more, stick the price up – So – no relationship between price and sales

457
457 – Until (of course) we control for demand – b1 (Price) = … – b2 (Demand) = 0.94 But which variables do we enter?

458
458 Direction of Influence Relationship between A and B – three possible processes: A → B (A causes B); B → A (B causes A); C → A and C → B (C causes A & B)

459
459 How do we establish the direction of influence? – Longitudinally? Storm Barometer Drops – Now if we could just get that barometer needle to stay where it is … Where the role of theory comes in (more on this later)

460
460 Isolation Isolate the outcome from all other influences – as experimenters try to do Cannot do this – can statistically isolate the effect – using multiple regression

461
461 Role of Theory Strong theory is crucial to making causal statements Fisher said: to make causal statements, make your theories elaborate – don't rely purely on statistical analysis Need strong theory to guide analyses – what critics of non-experimental research don't understand

462
462 S.J. Gould – a critic – says correlate price of petrol and his age, for the last 10 years – find a correlation – Ha! (he says) that doesn't mean there is a causal link – Of course not! (we say). No social scientist would do that analysis without first thinking (very hard) about the possible causal relations between the variables of interest Would control for time, prices, etc …

463
463 Atkinson, et al. (1996) – relationship between college grades and number of hours worked – negative correlation – Need to control for other variables – ability, intelligence Gould says most correlations are non-causal (1982, p243) – Of course!!!!

464
464 I drink a lot of beer 120 non-causal correlations, 16 causal relations: karaoke, jokes (about statistics), children wake early, bathroom, headache, sleeping, equations (beermat), laugh, thirsty, fried breakfast, no beer, curry, chips, falling over, lose keys, curtains closed

465
465 Abelson (1995) elaborates on this – method of signatures A collection of correlations relating to the process – the signature of the process e.g. tobacco smoking and lung cancer – can we account for all of these findings with any other theory?

466
466 1.The longer a person has smoked cigarettes, the greater the risk of cancer. 2.The more cigarettes a person smokes over a given time period, the greater the risk of cancer. 3.People who stop smoking have lower cancer rates than do those who keep smoking. 4.Smokers' cancers tend to occur in the lungs, and be of a particular type. 5.Smokers have elevated rates of other diseases. 6.People who smoke cigars or pipes, and do not usually inhale, have abnormally high rates of lip cancer. 7.Smokers of filter-tipped cigarettes have lower cancer rates than other cigarette smokers. 8.Non-smokers who live with smokers have elevated cancer rates. (Abelson, 1995)

467
467 – In addition, there should be no anomalous correlations If smokers had more fallen arches than non-smokers, not consistent with theory Failure to use theory to select appropriate variables – specification error – e.g. in previous example – Predict wealth from price and sales increase price, price increases Increase sales, price increases

468
468 Sometimes these are indicators of the process, not the process itself – e.g. barometer – stopping the needle won't help – e.g. inflation? Indicator or cause of economic health?

469
469 No Causation without Experimentation Blatantly untrue – I don't doubt that the sun shining makes us warm Why the aversion? – Pearl (2000) says the problem is that there is no mathematical operator (e.g. =) – No one realised that you needed one – Until you build a robot

470
470 AI and Causality A robot needs to make judgements about causality Needs to have a mathematical representation of causality – Suddenly, a problem! – It doesn't exist Most operators are non-directional Causality is directional

471
471 Sample Sizes How many subjects does it take to run a regression analysis?

472
472 Introduction Social scientists don't worry enough about the sample size required – Why didn't you get a significant result? – I didn't have a large enough sample Not a common answer, but a very common reason More recently awareness of sample size is increasing – use too few – no point doing the research – use too many – waste their time

473
473 Research funding bodies Ethical review panels – both become more interested in sample size calculations We will look at two approaches – Rules of thumb (quite quickly) – Power Analysis (more slowly)

474
474 Rules of Thumb Lots of simple rules of thumb exist – 10 cases per IV – and at least 100 cases – Green (1991) more sophisticated To test significance of R² – N = 50 + 8k To test significance of slopes – N = 104 + k Rules of thumb don't take into account all the information that we have – Power analysis does
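Green's (1991) rules as usually cited — N ≥ 50 + 8k to test the overall R², N ≥ 104 + k to test individual slopes, the larger of the two if both matter — fit in a one-line helper (the function name is made up):

```python
# Green's (1991) minimum-N rules of thumb, k = number of predictors.
def green_minimum_n(k, test="both"):
    r2_n = 50 + 8 * k      # testing the overall R^2
    slope_n = 104 + k      # testing individual slopes
    return {"r2": r2_n, "slopes": slope_n, "both": max(r2_n, slope_n)}[test]
```

Note the crossover: with few predictors the slope rule demands more cases; past about seven predictors the R² rule does.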

475
475 Power Analysis Introducing Power Analysis Hypothesis test – tells us the probability of a result of that magnitude occurring, if the null hypothesis is correct (i.e. there is no effect in the population) Doesn't tell us – the probability of that result, if the null hypothesis is false (i.e., there actually is an effect in the population)

476
476 According to Cohen (1982) all null hypotheses are false – everything that might have an effect, does have an effect it is just that the effect is often very tiny

477
477 Type I Errors Type I error is false rejection of H0 Probability of making a type I error = α – the significance value cut-off usually 0.05 (by convention) Always this value Not affected by – sample size – type of test

478
478 Type II errors Type II error is false acceptance of the null hypothesis – Much, much trickier We think we have some idea – we almost certainly don't Example – I do an experiment (random sampling, all assumptions perfectly satisfied) – I find p = 0.05

479
479 – You repeat the experiment exactly different random sample from same population – What is the probability you will find p < 0.05? – Answer: 0.5 – Another experiment, I find p = 0.01 – Probability you find p < 0.05? – Answer: 0.79 Very hard to work out – not intuitive – need to understand non-central sampling distributions (more in a minute)

480
480 Probability of type II error = beta (β) – same symbol as a population regression parameter (to be confusing) Power = 1 − β – Probability of getting a significant result (given that there is a significant result to be found)

481
481 State of the World: H0 true (no effect to be found) or H0 false (effect to be found) Research Findings: we find no effect (p > 0.05) or we find an effect (p < 0.05) – H0 true but we find an effect: Type I error, p = α – H0 false but we find no effect: Type II error, p = β, power = 1 − β

482
482 Four parameters in power analysis – α – prob. of Type I error – β – prob. of Type II error (power = 1 − β) – Effect size – size of effect in population – N Know any three, can calculate the fourth – Look at them one at a time

483
483 α – Probability of Type I error – Usually set to 0.05 – Somewhat arbitrary sometimes adjusted because of circumstances – rarely because of power analysis – May want to adjust it, based on power analysis

484
484 β – Probability of type II error – Power (probability of finding a result) = 1 − β – Standard is 80% Some argue for 90% – Implication that Type I error is 4 times more serious than Type II error adjust ratio with compromise power analysis

485
485 Effect size in the population – Most problematic to determine – Three ways 1.What effect size would be useful to find? R² = no use (probably) 2.Base it on previous research – what have other people found? 3.Use Cohen's conventions – small R² = 0.02 – medium R² = 0.13 – large R² = 0.26

486
486 – Effect size usually measured as f² – For R²: f² = R² / (1 − R²)

487
487 – For (standardised) slopes: f² = sr² / (1 − R²) – Where sr² is the contribution to the variance accounted for by the variable of interest – i.e. sr² = R² (with variable) − R² (without) change in R² in hierarchical regression
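Both f² formulas are one-liners; a sketch with made-up function names, using the formulas from these two slides:

```python
# Cohen's f^2 effect sizes for power analysis.
def f2_model(r2):
    """f^2 for the whole model: R^2 / (1 - R^2)."""
    return r2 / (1 - r2)

def f2_predictor(r2_with, r2_without):
    """f^2 for one predictor: sr^2 / (1 - R^2 of the full model),
    where sr^2 is the change in R^2 when the predictor is added."""
    return (r2_with - r2_without) / (1 - r2_with)
```

Plugging in Cohen's "medium" R² of 0.13 gives f² ≈ 0.15, his medium f² benchmark; the second function, fed the education/salary R² values from Lesson 9 (0.401 rising to 0.585), gives the unique contribution of the quadratic term.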

488
488 N – the sample size – usually use the other three parameters to determine this – sometimes adjust other parameters (α, β) based on this – e.g. You can have 50 participants. No more.

489
489 Doing power analysis With power analysis program – SamplePower, GPower (free), Nquery – With Stata command sampsi Which I find very confusing But we'll use it anyway

490
sampsi Limited in usefulness – A categorical, two-group predictor sampsi 0 0.5, pre(1) r01(0.5) n1(50) sd(1) – Find power for detecting an effect of 0.5 When there's one other variable at baseline Which correlates 0.5 50 people in each group When sd is 1

491
sampsi … Method: ANCOVA
relative efficiency =
adjustment to sd =
adjusted sd1 =
Estimated power: power =
491

492
GPower Better for regression designs 492

493

494

495
495 Underpowered Studies Research in the social sciences is often underpowered – Why? – See Paper B11 – the persistence of underpowered studies

496
496 Extra Reading Power traditionally focuses on p values – What about CIs? – Paper B8 – Obtaining regression coefficients that are accurate, not simply significant

497
Exercise

498
498 Collinearity

499
499 Collinearity as Issue and Assumption Collinearity (multicollinearity) – the extent to which the predictors are (multiply) correlated If R² for any IV, using other IVs = 1.00 – perfect collinearity – variable is a linear sum of other variables – regression will not proceed – (SPSS will arbitrarily throw out a variable)

500
500 R² < 1.00, but high – other problems may arise Four things to look at in collinearity – meaning – implications – detection – actions

501
501 Meaning of Collinearity Literally co-linearity – lying along the same line Perfect collinearity – when some IVs predict another – Total = S1 + S2 + S3 + S4 – S1 = Total − (S2 + S3 + S4) – rare

502
502 Less than perfect – when some IVs are close to predicting other IVs – correlations between IVs are high (usually, but not always) high multiple correlations

503
503 Implications Affects the stability of the parameter estimates – and so the standard errors of the parameter estimates – and so the significance and CIs Because – shared variance, which the regression procedure doesn't know where to put

504
504 Sex differences – due to genetics? – due to upbringing? – (almost) perfect collinearity statistically impossible to tell

505
505 When collinearity is less than perfect – increases variability of estimates between samples – estimates are unstable – reflected in the variances, and hence standard errors

506
506 Detecting Collinearity Look at the parameter estimates – large standardised parameter estimates (>0.3?), which are not significant be suspicious Run a series of regressions – each IV as outcome – all other IVs as IVs for each IV

507
507 Sounds like hard work? – SPSS does it for us! Ask for collinearity diagnostics – Tolerance – calculated for every IV – Variance Inflation Factor (VIF) – its square root is the amount the s.e. has been increased
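For the two-predictor case the diagnostics reduce to the simple correlation, so they are easy to sketch by hand (illustrative Python with made-up function names; SPSS and Stata compute these for you):

```python
# Tolerance = 1 - R^2 of an IV regressed on the other IVs (here just r^2),
# VIF = 1 / tolerance, and sqrt(VIF) = factor by which the SE is inflated.
# Perfect collinearity gives tolerance 0, so VIF blows up - the analysis
# "cannot proceed", matching the slides.
from math import sqrt

def pearson_r(x, y):
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxy = sum((a - xbar) * (b - ybar) for a, b in zip(x, y))
    sxx = sum((a - xbar) ** 2 for a in x)
    syy = sum((b - ybar) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

def collinearity_diagnostics(x1, x2):
    r = pearson_r(x1, x2)
    tolerance = 1 - r * r
    vif = 1 / tolerance
    return tolerance, vif, sqrt(vif)   # last value: SE inflation factor
```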

508
508 Actions What you can do about collinearity no quick fix (Fox, 1991) 1.Get new data avoids the problem address the question in a different way e.g. find people who have been raised as the wrong gender such people exist, but are rare Not a very useful suggestion

509
509 2.Collect more data not different data, more data collinearity increases standard error (se) se decreases as N increases get a bigger N 3.Remove / Combine variables If an IV correlates highly with other IVs Not telling us much new If you have two (or more) IVs which are very similar e.g. 2 measures of depression, socio-economic status, achievement, etc

510
510 sum them, average them, remove one Many measures use principal components analysis to reduce them 4.Use stepwise regression (or some flavour of) See previous comments Can be useful in a theoretical vacuum 5.Ridge regression not very useful behaves weirdly

511
Exercise 8.2, 8.3,

512
512 Measurement Error

513
513 What is Measurement Error In social science, it is unlikely that we measure any variable perfectly – measurement error represents this imperfection We assume that we have a true score – T A measure of that score – x

514
514 x = T + e – just like a regression equation – standardise the parameters – the standardised effect of T gives the reliability the amount of variance in x which comes from T but, like a regression equation – assume that e is random and has a mean of zero – more on that later

515
515 Simple Effects of Measurement Error Lowers the measured correlation – between two variables Real correlation – true scores (x* and y*) Measured correlation – measured scores (x and y)

516
516 [Path diagram: true scores x* and y* correlate r_x*y*; x* is measured as x with reliability r_xx and error e; y* is measured as y with reliability r_yy and error e; the measured correlation of x and y is r_xy]

517
517 Attenuation of correlation: r_xy = r_x*y* × √(r_xx × r_yy) Attenuation-corrected correlation: r_x*y* = r_xy / √(r_xx × r_yy)
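The attenuation formulas invert each other, which a tiny sketch makes concrete (made-up function names; the slides warn that the correction itself is dangerous in practice):

```python
# Spearman attenuation: measured r is the true r shrunk by the
# square root of the product of the two reliabilities.
from math import sqrt

def attenuated(r_true, rel_x, rel_y):
    """Measured correlation implied by a true correlation and reliabilities."""
    return r_true * sqrt(rel_x * rel_y)

def disattenuated(r_measured, rel_x, rel_y):
    """Attenuation-corrected estimate of the true correlation."""
    return r_measured / sqrt(rel_x * rel_y)
```

For example, a true correlation of 0.5 measured with reliabilities 0.8 and 0.9 shows up as about 0.42; correcting it recovers 0.5.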

518
518 Example

519
519 Complex Effects of Measurement Error Really horribly complex Measurement error reduces correlations – reduces the estimate of the slope – reducing one estimate increases others – because of effects of control – combined with effects of suppressor variables – exercise to examine this

520
520 Dealing with Measurement Error Attenuation correction – very dangerous – not recommended Avoid in the first place – use reliable measures – don't discard information don't categorise Age: 10-20, 21-30, …

521
521 Complications Assume measurement error is – additive – linear Additive – e.g. weight – people may under-report / over-report at the extremes Linear – particularly the case when using proxy variables

522
522 e.g. proxy measures – Want to know effort on childcare, count number of children 1st child is more effort than 19th child – Want to know financial status, count income 1st £1 has much greater effect on financial status than the 1,000,000th.

523
Exercise

524
524 Lesson 9: Non-Linear Analysis in Regression

525
525 Introduction Non-linear effect occurs – when the effect of one predictor – is not consistent across the range of the IV Assumption is violated – expected value of residuals = 0 – no longer the case

526
526 Some Examples

527
527 A Learning Curve [chart: skill plotted against experience]

528
528 Yerkes-Dodson Law of Arousal [chart: performance plotted against arousal]

529
529 Enthusiasm Levels over a Lesson on Regression [chart: enthusiasm plotted against time, from "enthusiastic" down to "suicidal"]

530
530 Learning – line changed direction once Yerkes-Dodson – line changed direction once Enthusiasm – line changed direction twice

531
531 Everything is Non-Linear Every relationship we look at is non-linear, for two reasons – Exam results cannot keep increasing with reading more books Linear in the range we examine – For small departures from linearity Cannot detect the difference Non-parsimonious solution

532
532 Non-Linear Transformations

533
533 Bending the Line Non-linear regression is hard – We cheat, and linearise the data Do linear regression Transformations We need to transform the data – rather than estimating a curved line which would be very difficult may not work with OLS – we can take a straight line, and bend it – or take a curved line, and straighten it back to linear (OLS) regression

534
534 We still do linear regression – Linear in the parameters – Y = b1x + b2x² + … Can do non-linear regression – Non-linear in the parameters – e.g. Y = b1x^b2 + … Much trickier – Statistical theory either breaks down OR becomes harder

535
535 Linear transformations – multiply by a constant – add a constant – change the slope and the intercept

536
536 [chart: lines y = x, y = 2x, y = x + 3]

537
537 Linear transformations are no use – alter the slope and intercept – don't alter the standardised parameter estimate Non-linear transformation – will bend the slope – quadratic transformation y = x² – one change of direction

538
538 – Cubic transformation y = x² + x³ – two changes of direction

539
539 To estimate a non-linear regression – we don't actually estimate anything non-linear – we transform the x-variable to a non-linear version – can estimate that straight line – represents the curve – we don't bend the line, we stretch the space around the line, and make it flat

540
540 Detecting Non-linearity

541
541 Draw a Scatterplot Draw a scatterplot of y plotted against x – see if it looks a bit non-linear – e.g. Education and beginning salary from bank data with line of best fit

542
542 A Real Example Starting salary and years of education – From employee data.sav

543
543 [scatterplot annotations: regions where the expected value of the error (residual) is > 0 and where it is < 0]

544
544 Use Residual Plot Scatterplot is only good for one variable – use the residual plot (that we used for heteroscedasticity) Good for many variables

545
545 We want – points to lie in a nice straight sausage

546
546 We don't want – a nasty bent sausage

547
547 Educational level and starting salary

548
548 Carrying Out Non-Linear Regression

549
549 Linear Transformation Linear transformation doesn't change – interpretation of slope – standardised slope – se, t, or p of slope – R² Can change – effect of a transformation

550
550 Actually more complex – with some transformations can add a constant with no effect (e.g. quadratic) With others it does have an effect – inverse, log Sometimes it is necessary to add a constant – negative numbers have no square root – 0 has no log

551
551 Education and Salary Linear Regression Saw previously that the assumption of expected errors = 0 was violated Anyway … – R² = 0.401, p < 0.001 – salbegin = b0 + b1 × educ – Standardised b1 (educ) = 0.63 – Both parameters make sense

552
552 Non-linear Effect Compute new variable – quadratic – educ2 = educ² Add this variable to the equation – R² = 0.585, p < 0.001 – salbegin = b0 + b1 × educ + b2 × educ² slightly curious – Standardised b1 (educ) = -2.4 b2 (educ²) = 3.1 – What is going on?

553
553 Collinearity – is what is going on – Correlation of educ and educ² is very high – Regression equation becomes difficult (impossible?) to interpret Need hierarchical regression – what is the change in R² – is that change significant? – R²(change) = 0.184, p < 0.001
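The collinearity here is typical: when x is all-positive, x and x² are almost perfectly correlated, and mean-centring x before squaring usually brings the correlation right down. A sketch with made-up education data (this trick is not on the slide; it is a standard remedy):

```python
# Correlation of x with x^2, raw vs mean-centred.
from math import sqrt

def pearson_r(x, y):
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxy = sum((a - xbar) * (b - ybar) for a, b in zip(x, y))
    return sxy / sqrt(sum((a - xbar) ** 2 for a in x) *
                      sum((b - ybar) ** 2 for b in y))

years = [8, 10, 12, 12, 14, 15, 16, 16, 17, 18, 19, 20]   # made-up years of education
raw = pearson_r(years, [v * v for v in years])
centred_x = [v - sum(years) / len(years) for v in years]
centred = pearson_r(centred_x, [v * v for v in centred_x])
# raw is near 1; centred is much smaller, so the quadratic term
# becomes interpretable without changing the shape of the fitted curve
```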

554 Cubic Effect While we are at it, let's look at the cubic effect – R²(change) = 0.004 – salbegin = b0 + b1e + b2e² + b3e³ – Standardised: b1(e) = 0.04, b2(e²) = …, b3(e³) = 2.71