
1 Applying Regression

2 2 The Course 14 (or so) lessons –Some flexibility Depends how we feel What we get through

3 3 Part I: Theory of Regression 1.Models in statistics 2.Models with more than one parameter: regression 3.Samples to populations 4.Introducing multiple regression 5.More on multiple regression

4 4 Part 2: Application of regression 6.Categorical predictor variables 7.Assumptions in regression analysis 8.Issues in regression analysis 9.Non-linear regression 10.Categorical and count variables 11.Moderators (interactions) in regression 12.Mediation and path analysis Part 3:Taking Regression Further (Kind of brief) 13.Introducing longitudinal multilevel models

5 Bonuses Bonus lesson1: Why is it called regression? Bonus lesson 2: Other types of regression. 5

6 House Rules Jeremy must remember – Not to talk too fast. If you don't understand – Ask – Any time. If you think I'm wrong – Ask. (I'm not always right)

7 The Assistants Carla Xena, Eugenia Suarez Moran, Arian Daneshmand

8 Learning New Techniques Best kind of data to learn a new technique – Data that you know well, and understand. Your own data – In computer labs (esp. later on) – Use your own data if you like. My data – I'll provide you with – Simple examples, small sample sizes, conceptually simple (even silly)

9 Computer Programs Stata – Mostly; I'll explain SPSS options – You'll like Stata more. Excel – For calculations – Semi-optional. GPower

10 10 Lesson 1: Models in statistics Models, parsimony, error, mean, OLS estimators

11 11 What is a Model?

12 What is a model? Representation – Of reality – Not reality. Model aeroplane represents a real aeroplane – If model aeroplane = real aeroplane, it isn't a model

13 Statistics is about modelling – Representing and simplifying. Sifting – What is important from what is not important. Parsimony – In statistical models we seek parsimony – Parsimony = simplicity

14 14 Parsimony in Science A model should be: –1: able to explain a lot –2: use as few concepts as possible More it explains –The more you get Fewer concepts –The lower the price Is it worth paying a higher price for a better model?

15 15 The Mean as a Model

16 The (Arithmetic) Mean We all know the mean – The average – Learned about it at school – Forget (didn't know) about how clever the mean is. The mean is: – An Ordinary Least Squares (OLS) estimator – Best Linear Unbiased Estimator (BLUE)

17 17 Mean as OLS Estimator Going back a step or two MODEL was a representation of DATA –We said we want a model that explains a lot –How much does a model explain? DATA = MODEL + ERROR ERROR = DATA - MODEL –We want a model with as little ERROR as possible

18 What is error? [Diagram: data points (Y), the model (b0, the mean), and the errors (e) between each data point and the model]

19 19 How can we calculate the amount of error? Sum of errors? Sum of absolute errors?

20 Are small and large errors equivalent? – One error of 4 – Four errors of 1 – The same? – What happens with different data? Y = (2, 2, 5) – b0 = 2 – Not very representative. Y = (2, 2, 4, 4) – b0 = any value from 2 to 4 – Indeterminate. There are an infinite number of solutions which would satisfy our criterion for minimum error

21 21 Sum of squared errors (SSE)

22 22 Determinate –Always gives one answer If we minimise SSE –Get the mean Shown in graph –SSE plotted against b 0 –Min value of SSE occurs when –b 0 = mean
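A sketch of the calculation, using the standard definition of SSE: for a candidate model value b0,

SSE(b_0) = \sum_i (Y_i - b_0)^2

and this sum is smallest when b_0 = \bar{Y}, the mean – which is the sense in which the mean is the least-squares model.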

23 23

24 24 The Mean as an OLS Estimate

25 Mean as OLS Estimate The mean is an Ordinary Least Squares (OLS) estimate – As are lots of other things. This is exciting because – OLS estimators are BLUE – Best Linear Unbiased Estimators – Proven with the Gauss-Markov theorem, which we won't worry about

26 26 BLUE Estimators Best –Minimum variance (of all possible unbiased estimators) –Narrower distribution than other estimators e.g. median, mode

27 27 SSE and the Standard Deviation Tying up a loose end

28 SSE closely related to SD Sample standard deviation – s – Biased estimator of population SD. Population standard deviation – σ. Need to know the mean to calculate SD – Reduces N by 1 – Hence divide by N − 1, not N – Like losing one df
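As a sketch of the link (standard formulas):

s = \sqrt{ \frac{SSE}{N-1} } = \sqrt{ \frac{\sum_i (Y_i - \bar{Y})^2}{N-1} }

so the sample SD is just the SSE per degree of freedom, square-rooted.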

29 29 Proof That the mean minimises SSE –Not that difficult –As statistical proofs go Available in –Maxwell and Delaney – Designing experiments and analysing data –Judd and McClelland – Data Analysis: a model comparison approach (out of print?)

30 What's a df? The number of parameters free to vary – When one is fixed. Term comes from engineering – Movement available to structures

31 Back to the Data Mean has 5 (N) df – 1st moment. SD has N − 1 df – Mean has been fixed – 2nd moment – Can think of it as the amount cases vary away from the mean

32 While we are at it … Skewness has N − 2 df – 3rd moment. Kurtosis has N − 3 df – 4th moment – Amount cases vary from …

33 Parsimony and df Number of df remaining – Measure of parsimony. Model which contained all the data – Has 0 df – Not a parsimonious model. Normal distribution – Can be described in terms of mean and SD – 2 parameters – (z with 0 parameters)

34 34 Summary of Lesson 1 Statistics is about modelling DATA –Models have parameters –Fewer parameters, more parsimony, better Models need to minimise ERROR –Best model, least ERROR –Depends on how we define ERROR –If we define error as sum of squared deviations from predicted value –Mean is best MODEL

35 Lesson 1a A really brief introduction to Stata 35

36 Commands [Screenshot of the Stata interface: command review window, variable list, command line, and output window]

37 Stata Commands Can use menus –But commands are easy All have similar format: command variables, options Stata is case sensitive –BEDS, beds, Beds Stata lets you shorten –summarize sqft –su sq 37

38 More Stata Commands Open exercise 1.4.dta –Run summarize sqm table beds mean price histogram price –Or su be tab be mean pr hist pr 38

39 39 Lesson 2: Models with one more parameter - regression

40 40 In Lesson 1 we said … Use a model to predict and describe data –Mean is a simple, one parameter model

41 41 More Models Slopes and Intercepts

42 More Models The mean is OK – As far as it goes – It just doesn't go very far – Very simple prediction, uses very little information. We often have more information than that – We want to use more information than that

43 43 House Prices Look at house prices in one area of Los Angeles Predictors of house prices Using: –Sale price, size, number of bedrooms, size of lot, year built …



46 House Prices [Table: address, list price, beds, baths, sqft for listings on OLYMPIAD Dr, CHANSON Dr, West 58TH Pl, FAIRWAY Blvd, DON LUIS Dr, West 59TH St, WHELAN Pl, West 63RD St and ANGELES VISTA Blvd]

47 47 One Parameter Model The mean How much is that house worth? $415,689 Use 1 df to say that

48 Adding More Parameters We have more information than this – We might as well use it – Add a linear function of size (square feet) (x1)

49 Alternative Expression Ŷ – estimate of Y (expected value of Y); Y – value of Y
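A sketch of the two expressions in standard notation (consistent with the residuals slide in Lesson 4):

Y_i = \hspace{2pt} b_0 + b_1 x_{1i} + e_i        (the value of Y)
\hat{Y}_i = b_0 + b_1 x_{1i}            (the estimate, or expected value, of Y)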

50 50 Estimating the Model We can estimate this model in four different, equivalent ways –Provides more than one way of thinking about it 1. Estimating the slope which minimises SSE 2. Examining the proportional reduction in SSE 3. Calculating the covariance 4. Looking at the efficiency of the predictions

51 51 Estimate the Slope to Minimise SSE

52 52 Estimate the Slope Stage 1 –Draw a scatterplot – x -axis at mean Not at zero Mark errors on it –Called residuals –Sum and square these to find SSE

53 53


55 55 Add another slope to the chart –Redraw residuals –Recalculate SSE –Move the line around to find slope which minimises SSE Find the slope

56 56 First attempt:

57 Any straight line can be defined with two parameters – The location (height) of the slope, b0 – Sometimes called the constant or intercept – The gradient of the slope, b1

58 58 Gradient 1 unit b 1 units

59 59 Height b 0 units

60 60 Height If we fix slope to zero –Height becomes mean –Hence mean is b 0 Height is defined as the point that the slope hits the y -axis –The constant –The y -intercept

61 Why the constant? – b0 × x0 – Where x0 is 1.00 for every case, i.e. x0 is constant. Implicit in Stata – (And SPSS, SAS, R) – Some packages force you to make it explicit – (Later on we'll need to make it explicit)

62 62 Why the intercept? –Where the regression line intercepts the y - axis –Sometimes called y -intercept

63 Finding the Slope How do we find the values of b0 and b1? – Start with a guess, then jiggle the values to find the best estimates which minimise SSE – Iterative approach. Computer intensive – used to matter, doesn't really any more (with fast computers and sensible search algorithms – more on that later)

64 Start with – b0 = 416 (mean) – b1 = 0.5 (nice round number) – SSE = 365,774. Then: – b0 = 300, b1 = 0.5, SSE = 341,683 – b0 = 300, b1 = 0.6, SSE = 310,240 – b0 = 300, b1 = 0.8, SSE = 264,573 – b0 = 300, b1 = 1, SSE = 301,797 – b0 = 250, b1 = 1, SSE = 255,366 – …

65 Quite a long time later – b0 = 216 – b1 = 1.08 – SSE = 145,… (the minimum). Gives the position of the – Regression line (or) – Line of best fit. Better than guessing. Not necessarily the only method – But it is OLS, so it is the best (it is BLUE)

66 66

67 We now know – A zero square metre house is worth $216,000 – Adding a square metre adds $1,080. Told us two things – Don't extrapolate to meaningless values of the x-axis – Constant is not necessarily useful – It is necessary to estimate the equation

68 Exercise 2a, 2b 68

69 Standardised Regression Line One big but: – Scale dependent. Values change – £ to other currencies, inflation. Scales change – £, £000, £00? Need to deal with this

70 Don't express in raw units – Express in SD units – SD of x1 = … – SD of y = … – b1 = … We increase x1 by 1, and Ŷ increases by b1; so we increase x1 by 1 SD and Ŷ increases by … SDs

71 71 Similarly, 1 unit of x 1 = 1/ SDs –Increase x 1 by 1 SD –Ŷ increases by (69.017/1) = Put them both together

72 72 The standardised regression line –Change (in SDs) in Ŷ associated with a change of 1 SD in x 1 A different route to the same answer –Standardise both variables (divide by SD) –Find line of best fit

73 73 The standardised regression line has a special name The Correlation Coefficient ( r ) (r stands for regression, but more on that later) Correlation coefficient is a standardised regression slope –Relative change, in terms of SDs
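A sketch of the relationship (standard result): the standardised slope is

r = b_1 \times \frac{SD_x}{SD_y}

i.e. the change in \hat{Y}, in SDs of Y, for a 1 SD change in x_1 – and that quantity is the correlation coefficient.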

74 Exercise 2c 74

75 75 Proportional Reduction in Error

76 76 Proportional Reduction in Error We might be interested in the level of improvement of the model –How much less error (as proportion) do we have –Proportional Reduction in Error (PRE) Mean only –Error(model 0) = 341,683 Mean + slope –Error(model 1) = 196,046
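A sketch of the calculation, using the two error figures on this slide:

PRE = \frac{Error(model\;0) - Error(model\;1)}{Error(model\;0)} = \frac{341{,}683 - 196{,}046}{341{,}683} \approx 0.43

so the slope removes roughly 43% of the error; as the next slides note, the square root of this proportion (about 0.65 here) is the correlation.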

77 77

78 78 But we squared all the errors in the first place –So we could take the square root This is the correlation coefficient Correlation coefficient is the square root of the proportion of variance explained

79 79 Standardised Covariance

80 80 Standardised Covariance We are still iterating –Need a closed-form equation –Equation to solve to get the parameter estimates Answer is a standardised covariance –A variable has variance –Amount of differentness We have used SSE so far

81 81 SSE varies with N –Higher N, higher SSE Divide by N –Gives SSE per person (or house) –(Actually N – 1, we have lost a df to the mean) Gives us the variance Same as SD 2 –We thought of SSE as a scattergram Y plotted against X –(repeated image follows)

82 82

83 83 Or we could plot Y against Y –Axes meet at the mean (415) –Draw a square for each point –Calculate an area for each square –Sum the areas Sum of areas –SSE Sum of areas divided by N –Variance

84 84 Plot of Y against Y

85 Draw Squares [Worked example: each score's deviation from the mean of 88.9 (e.g. 35 − 88.9 = −53.9; another deviation is 40.1) is squared to give the area of its square (e.g. 40.1 × 40.1)]

86 86 What if we do the same procedure –Instead of Y against Y –Y against X Draw rectangles (not squares) Sum the area Divide by N - 1 This gives us the variance of x with y –The Covariance –Shortened to Cov( x, y )

87 87

88 [Worked example: deviations on the two axes are multiplied to give rectangles – e.g. 55 − 88.9 = −33.9 on one axis and −2 on the other, area = (−33.9) × (−2) = 67.8; deviations of 49.1 and 1 give area = 49.1 × 1 = 49.1]

89 89 More formally (and easily) We can state what we are doing as an equation –Where Cov( x, y ) is the covariance Cov( x, y )=5165 What do points in different sectors do to the covariance?
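A sketch of the equation, using the standard definition:

Cov(x, y) = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{N - 1}

Points in the top-right and bottom-left sectors (both deviations the same sign) add to the covariance; points in the other two sectors subtract from it.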

90 90 Problem with the covariance –Tells us about two things –The variance of X and Y –The covariance Need to standardise it –Like the slope Two ways to standardise the covariance –Standardise the variables first Subtract from mean and divide by SD –Standardise the covariance afterwards

91 First approach – Much more computationally expensive – Too much like hard work to do by hand – Need to standardise every value. Second approach – Much easier – Standardise the final value only. Need the combined variance – Multiply the two variances – Find the square root (they were multiplied in the first place)

92 92 Standardised covariance

93 93 The correlation coefficient –A standardised covariance is a correlation coefficient

94 94 Expanded …
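A sketch of the standardised covariance in symbols (standard formulas):

r = \frac{Cov(x, y)}{SD_x \times SD_y} = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2 \; \sum_i (y_i - \bar{y})^2}}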

95 95 This means … –We now have a closed form equation to calculate the correlation –Which is the standardised slope –Which we can use to calculate the unstandardised slope

96 96 We know that:
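Presumably the standard conversion is what is meant here; as a sketch:

b_1 = r \times \frac{SD_y}{SD_x}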

97 97 So value of b 1 is the same as the iterative approach

98 98 The intercept –Just while we are at it The variables are centred at zero –We subtracted the mean from both variables –Intercept is zero, because the axes cross at the mean

99 99 Add mean of y to the constant –Adjusts for centring y Subtract mean of x –But not the whole mean of x –Need to correct it for the slope
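A sketch of the resulting intercept (standard result):

b_0 = \bar{y} - b_1 \bar{x}

add the mean of y, and subtract the mean of x multiplied by (corrected for) the slope.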

100 100 Accuracy of Prediction

101 101 One More (Last One) We have one more way to calculate the correlation –Looking at the accuracy of the prediction Use the parameters – b 0 and b 1 –To calculate a predicted value for each case

102 102 Plot actual price against predicted price –From the model

103 103

104 r = … – the correlation between actual and predicted value. Seems a futile thing to do – And at this stage, it is – But later on, we will see why

105 105 Some More Formulae For hand calculation Point biserial

106 Phi (φ) – Used for 2 dichotomous variables.
                Vote P    Vote Q
Homeowner       A: 19     B: 54
Not homeowner   C: 60     D: 53
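A sketch of the usual hand-calculation formula, with the cells labelled as in the table:

\phi = \frac{AD - BC}{\sqrt{(A+B)(C+D)(A+C)(B+D)}}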

107 107 Problem with the phi correlation –Unless P x = P y (or P x = 1 – P y ) Maximum (absolute) value is < 1.00 Tetrachoric correlation can be used to correct this Rank (Spearman) correlation –Used where data are ranked

108 108 Summary Mean is an OLS estimate –OLS estimates are BLUE Regression line –Best prediction of outcome from predictor –OLS estimate (like mean) Standardised regression line –A correlation

109 109 Four ways to think about a correlation –1. Standardised regression line –2. Proportional Reduction in Error (PRE) –3. Standardised covariance –4. Accuracy of prediction

110 Regression and Correlation in Stata Correlation: correlate x y correlate x y, cov regress y x Or regress price sqm 110

111 Post-Estimation Stata commands leave something behind. You can run post-estimation commands – they mean "from the last regression". Get predicted values: predict my_preds. Get residuals: predict my_res, residuals – "residuals" comes after the comma, so it's an option

112 Graphs Scatterplot scatter price beds Regression line –lfit price beds Both graphs –twoway (scatter price beds) (lfit price beds) 112

113 What happens if you run reg without a predictor? –regress price 113

114 Exercises 114

115 115 Lesson 3: Samples to Populations – Standard Errors and Statistical Significance

116 116 The Problem In Social Sciences –We investigate samples Theoretically –Randomly taken from a specified population –Every member has an equal chance of being sampled –Sampling one member does not alter the chances of sampling another Not the case in (say) physics, biology, etc.

117 Population But it's the population that we are interested in – Not the sample – Population statistic represented with a Greek letter – Hat means estimate

118 118 Sample statistics (e.g. mean) estimate population parameters Want to know –Likely size of the parameter –If it is > 0

119 119 Sampling Distribution We need to know the sampling distribution of a parameter estimate –How much does it vary from sample to sample If we make some assumptions –We can know the sampling distribution of many statistics –Start with the mean

120 Sampling Distribution of the Mean Given – Normal distribution – Random sample – Continuous data. Mean has a known sampling distribution – Repeatedly sampling will give a known distribution of means – Centred around the true (population) mean (μ)

121 121 Analysis Example: Memory Difference in memory for different words –10 participants given a list of 30 words to learn, and then tested –Two types of word Abstract: e.g. love, justice Concrete: e.g. carrot, table

122 122

123 Confidence Intervals This means – If we know the mean in our sample – We can estimate where the mean in the population (μ) is likely to be. Using – The standard error (se) of the mean – Represents the standard deviation of the sampling distribution of the mean

124 124 Almost 2 SDs contain 95% 1 SD contains 68%

125 We know the sampling distribution of the mean – t distributed if N < 30 – Normal with large N (> 30), asymptotically normal. Know the range within which means from other samples will fall – Therefore the likely range of μ

126 Two implications of the equation – Increasing N decreases SE, but only a bit (SE halves if N is 4 times bigger) – Decreasing SD decreases SE. Calculate Confidence Intervals – From standard errors. 95% is a standard level of CI – In 95% of samples the true mean will lie within the 95% CIs – In large samples: 95% CI = ±1.96 SE – In smaller samples: depends on the t distribution (df = N − 1 = 9)
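A sketch of the equation being discussed (standard formulas):

SE = \frac{s}{\sqrt{N}}, \qquad 95\%\;CI = \bar{Y} \pm t_{(df = N-1)} \times SE \approx \bar{Y} \pm 1.96 \times SE \;\text{in large samples}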

127 127

128 128

129 What is a CI? (For a 95% CI): 95% chance that the true (population) value lies within the confidence interval? No; in 95% of samples, the confidence interval will contain the true mean

130 Significance Test Probability that μ is a certain value – Almost always 0 – Doesn't have to be though. We want to test the hypothesis that the difference is equal to 0 – i.e. find the probability of this difference occurring in our sample IF μ = 0 – (Not the same as the probability that μ = 0)

131 131 Calculate SE, and then t – t has a known sampling distribution –Can test probability that a certain value is included

132 132 Other Parameter Estimates Same approach –Prediction, slope, intercept, predicted values –At this point, prediction and slope are the same Wont be later on One predictor only –More complicated with > 1

133 133 Testing the Degree of Prediction Prediction is correlation of Y with Ŷ –The correlation – when we have one IV Use F, rather than t Started with SSE for the mean only –This is SS total –Divide this into SS residual –SS regression SS tot = SS reg + SS res
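A sketch of the test in symbols (standard formulas):

SS_{tot} = SS_{reg} + SS_{res}, \qquad F = \frac{SS_{reg}/k}{SS_{res}/(N - k - 1)}

with k and N − k − 1 degrees of freedom (here k = 1 predictor).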

134 134

135 Back to the house prices – Original SSE (SS total) = … – SS residual = … (what is left after our model) – SS regression = SS total − SS residual = … (what our model explains)

136 136

137 F = 18.6, df = 1, 25, p < 0.001 – Can reject H0 – H0: prediction is not better than chance – A significant effect

138 138 Statistical Significance: What does a p-value (really) mean?

139 139 A Quiz Six questions, each true or false Write down your answers (if you like) An experiment has been done. Carried out perfectly. All assumptions perfectly satisfied. Absolutely no problems. P = 0.01 –Which of the following can we say?

140 You have absolutely disproved the null hypothesis (that is, there is no difference between the population means).

141 You have found the probability of the null hypothesis being true.

142 You have absolutely proved your experimental hypothesis (that there is a difference between the population means).

143 You can deduce the probability of the experimental hypothesis being true.

144 You know, if you decide to reject the null hypothesis, the probability that you are making the wrong decision.

145 You have a reliable experimental finding in the sense that if, hypothetically, the experiment were repeated a great number of times, you would obtain a significant result on 99% of occasions.

146 146 OK, What is a p-value Cohen (1994) [a p-value] does not tell us what we want to know, and we so much want to know what we want to know that, out of desperation, we nevertheless believe it does (p 997).

147 OK, What is a p-value Sorry, didn't answer the question. It's: the probability of obtaining a result as or more extreme than the result we have in the study, given that the null hypothesis is true. Not the probability the null hypothesis is true

148 148 A Bit of Notation Not because we like notation –But we have to say a lot less Probability – P Null hypothesis is true – H Result (data) – D Given - |

149 What's a P Value P(D|H) – Probability of the data occurring if the null hypothesis is true. Not P(H|D) (what we want to know) – Probability that the null hypothesis is true, given that we have the data. P(H|D) ≠ P(D|H)

150 What is the probability you are prime minister – Given that you are British – P(M|B) – Very low. What is the probability you are British – Given you are prime minister – P(B|M) – Very high. P(M|B) ≠ P(B|M)

151 There's been a murder – Someone murdered an instructor (perhaps they talked too much). The police have DNA. The police have your DNA – They match(!). DNA matches 1 in 1,000,000 people. What's the probability you didn't do the murder, given the DNA match: P(H|D)?

152 Police say: – P(D|H) = 1/1,000,000. Luckily, you have Jeremy on your defence team. We say: – P(D|H) ≠ P(H|D). Probability that someone matches the DNA, who didn't do the murder – Incredibly high

153 Back to the Questions Haller and Krauss (2002) – Asked those questions of groups in Germany – Psychology students – Psychology lecturers and professors (who didn't teach stats) – Psychology lecturers and professors (who did teach stats)

154 1. You have absolutely disproved the null hypothesis (that is, there is no difference between the population means). Answered "true": 34% of students, 15% of professors/lecturers, 10% of professors/lecturers teaching statistics. False – We have found evidence against the null hypothesis

155 2. You have found the probability of the null hypothesis being true. – 32% of students – 26% of professors/lecturers – 17% of professors/lecturers teaching statistics. False – We don't know

156 You have absolutely proved your experimental hypothesis (that there is a difference between the population means). –20% of students –13% of professors/lecturers –10% of professors/lecturers teaching statistics False

157 157 4.You can deduce the probability of the experimental hypothesis being true. –59% of students –33% of professors/lecturers –33% of professors/lecturers teaching statistics False

158 5. You know, if you decide to reject the null hypothesis, the probability that you are making the wrong decision. 68% of students, 67% of professors/lecturers, 73% of professors/lecturers teaching statistics. False – Can be worked out – P(replication)

159 6. You have a reliable experimental finding in the sense that if, hypothetically, the experiment were repeated a great number of times, you would obtain a significant result on 99% of occasions. – 41% of students – 49% of professors/lecturers – 37% of professors/lecturers teaching statistics. False – Another tricky one – It can be worked out

160 160 One Last Quiz I carry out a study –All assumptions perfectly satisfied –Random sample from population –I find p = 0.05 You replicate the study exactly –What is probability you find p < 0.05?

161 161 I carry out a study –All assumptions perfectly satisfied –Random sample from population –I find p = 0.01 You replicate the study exactly –What is probability you find p < 0.05?

162 Significance testing creates boundaries and gaps where none exist. Significance testing means that we find it hard to build upon knowledge – we don't get an accumulation of knowledge

163 163 Yates (1951) "the emphasis given to formal tests of significance... has resulted in... an undue concentration of effort by mathematical statisticians on investigations of tests of significance applicable to problems which are of little or no practical importance... and... it has caused scientific research workers to pay undue attention to the results of the tests of significance... and too little to the estimates of the magnitude of the effects they are investigating

164 164 Testing the Slope Same idea as with the mean –Estimate 95% CI of slope –Estimate significance of difference from a value (usually 0) Need to know the SD of the slope –Similar to SD of the mean

165 165

166 166 Similar to equation for SD of mean Then we need standard error -Similar (ish) When we have standard error –Can go on to 95% CI –Significance of difference
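A sketch of the single-predictor version (standard formula):

SE(b_1) = \sqrt{ \frac{SS_{res}/(N-2)}{\sum_i (x_i - \bar{x})^2} }

the residual variance relative to how spread out the predictor is.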

167 167

168 Confidence Limits 95% CI – critical t, with N − k − 1 df, is 2.31 – CI = b ± 2.31 × SE = … – the 95% confidence limits

169 Significance of difference from zero – i.e. probability of getting the result if β = 0 – Not the probability that β = 0. This probability is (of course) the same as the p-value for the prediction

170 170 Testing the Standardised Slope (Correlation) Correlation is bounded between –1 and +1 –Does not have symmetrical distribution, except around 0 Need to transform it –Fisher z transformation – approximately normal
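A sketch of the transformation (standard formulas):

z' = \tfrac{1}{2} \ln \frac{1 + r}{1 - r}, \qquad SE(z') = \frac{1}{\sqrt{N - 3}}

Confidence limits are computed on the z' scale and then transformed back to r.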

171 95% CIs – 0.879 − 1.96 × 0.38 = 0.13 – 0.879 + 1.96 × 0.38 = 1.62

172 Transform back to correlation: 95% CIs = 0.13 to 0.92. Very wide – Because of small sample size – Maybe that's why CIs are not reported?

173 173 Using Excel Functions in excel –Fisher() – to carry out Fisher transformation –Fisherinv() – to transform back to correlation

174 174 The Others Same ideas for calculation of CIs and SEs for –Predicted score –Gives expected range of values given X Same for intercept –But we have probably had enough

175 One more tricky thing (Don't worry if you don't understand). For means, regression estimates, etc. – Estimate … – 95% confidence intervals … – P = … – They match

176 For correlations, odds ratios, etc. – No longer match – 95% CIs …, P-value … – Because the sampling distribution of the mean – Does not depend on the value. The sampling distribution of a proportion – Does depend on the value – More certainty around 0.9 than around 0.5

177 177 Lesson 4: Introducing Multiple Regression

178 Residuals We said Y = b0 + b1x1. We could have said Yi = b0 + b1xi1 + ei. We ignored the i on the Y, and we ignored the ei – It's called error, after all. But it isn't just error – Trying to tell us something

179 What Error Tells Us Error tells us that a case has a different score for Y than we predict – There is something about that case. Called the residual – What is left over, after the model. Contains information – Something is making the residual differ from 0 – But what?


181 181

182 182 The residual (+ the mean) is the expected value of Y If all cases were equal on X It is the value of Y, controlling for X Other words: –Holding constant –Partialling –Residualising (residualised scores) –Conditioned on

183 183 Sometimes adjustment is enough on its own –Measure performance against criteria Teenage pregnancy rate –Measure pregnancy and abortion rate in areas –Control for socio-economic deprivation, religion, rural/urban and anything else important –See which areas have lower teenage pregnancy and abortion rate, given same level of deprivation Value added education tables –Measure school performance –Control for initial intake

184 [Table: Sqm, Price, Predicted, Residual, and Adjusted Value (mean + residual) for each house]

185 Control? In experimental research – Use experimental control – e.g. same conditions, materials, time of day, accurate measures, random assignment to conditions. In non-experimental research – Can't use experimental control – Use statistical control instead

186 186 Analysis of Residuals What predicts differences in crime rate –After controlling for socio-economic deprivation –Number of police? –Crime prevention schemes? –Rural/Urban proportions? –Something else This is (mostly) what multiple regression is about

187 187 Exam performance –Consider number of books a student read (books) –Number of lectures (max 20) a student attended (attend) Books and attend as IV, grade as outcome

188 188 First 10 cases

189 Use books as IV – R = 0.492, F = 12.1, df = 1, 38, p = 0.001 – b0 = 52.1, b1 = 5.7 – (Intercept makes sense). Use attend as IV – R = 0.482, F = 11.5, df = 1, 38, p = 0.002 – b0 = 37.0, b1 = 1.9 – (Intercept makes less sense)

190 [Scatterplot: grade (out of 100) against books]

191 191

192 192 Problem Use R 2 to give proportion of shared variance –Books = 24% –Attend = 23% So we have explained 24% + 23% = 47% of the variance –NO!!!!!

193 Correlation of books and attend is (unsurprisingly) not zero – Some of the variance that books shares with grade is also shared by attend. Look at the correlation matrix [correlation matrix of BOOKS, ATTEND and GRADE]

194 194 I have access to 2 cars My wife has access to 2 cars –We have access to four cars? –No. We need to know how many of my 2 cars are shared Similarly with regression –But we can do this with the residuals –Residuals are what is left after (say) books –See if residual variance is explained by attend –Can use this new residual variance to calculate SS res, SS total and SS reg

195 195 Well. Almost. –This would give us correct values for SS –Would not be correct for slopes, etc Because assumes that the variables have a causal priority –Why should attend have to take what is left from books? –Why should books have to take what is left by attend? Use OLS again; take variance they share

196 196 Simultaneously estimate 2 parameters –b 1 and b 2 –Y = b 0 + b 1 x 1 + b 2 x 2 –x 1 and x 2 are IVs Shared variance Not trying to fit a line any more –Trying to fit a plane Can solve iteratively –Closed form equations better –But they are unwieldy
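A minimal Stata sketch of fitting the two-predictor model, assuming the grades data with variables named grade, books and attend (as on the earlier slides):

regress grade books attend
* the coefficients on books and attend are estimated simultaneously;
* each is the effect of that predictor with the other held constant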

197 [3D scatterplot: y against x1 and x2 (2 points only)]

198 [3D plot: the regression plane, defined by b0, b1 and b2]

199 199

200 Increasing Power What if the predictors don't correlate? Regression is still good – It increases the power to detect effects – (More on power later) – Less variance left over. When do we know the two predictors don't correlate?

201 201 (Really) Ridiculous Equations

202 The good news – There is an easier way. The bad news – It involves matrix algebra. The good news – We don't really need to know how to do it

203 We're not programming computers – So we usually don't care. Very, very occasionally it helps to know what the computer is doing

204 Back to the Good News We can calculate the standardised parameters as B = Rxx⁻¹ × Rxy. Where – B is the vector of regression weights – Rxx⁻¹ is the inverse of the correlation matrix of the independent (x) variables – Rxy is the vector of correlations of the x and y variables
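A minimal Stata sketch of that matrix calculation, again assuming variables named books, attend and grade (any predictors and outcome would do):

quietly correlate books attend grade
matrix R = r(C)               // full correlation matrix
matrix Rxx = R[1..2, 1..2]    // correlations among the predictors
matrix Rxy = R[1..2, 3]       // correlations of predictors with the outcome
matrix B = inv(Rxx) * Rxy     // standardised regression weights
matrix list B

The weights in B should match (up to rounding) the standardised estimates from regress.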

205 Exercise

206 Exercises Exercise 4.1 –Grades data in Excel Exercise 4.2 –Repeat in Stata Exercise 4.3 –Zero correlation Exercise 4.4 –Repeat therapy data Exercise 4.5 –PTSD in families. 206

207 207 Lesson 5: More on Multiple Regression

208 Contents More on parameter estimates –Standard errors of coefficients R, R2, adjusted R2 Extra bits –Suppressors –Decisions about control variables –Standardized estimates > 1 –Variable entry techniques 208

209 More on Parameter Estimates 209

210 210 Parameter Estimates Parameter estimates ( b 1, b 2 … b k ) were standardised –Because we analysed a correlation matrix Represent the correlation of each IV with the outcome –When all other IVs are held constant

211 211 Can also be unstandardised Unstandardised represent the unit (rather than SDs) change in the outcome associated with a 1 unit change in the IV –When all the other variables are held constant Parameters have standard errors associated with them –As with one IV –Hence t-test, and associated probability can be calculated Trickier than with one IV

212 212 Standard Error of Regression Coefficient Standardised is easier –R 2 i is the value of R 2 when all other predictors are used as predictors of that variable Note that if R 2 i = 0, the equation is the same as for previous
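A sketch of one common form of that formula (an assumption on my part that this matches the slide's version):

SE(\beta_j) = \sqrt{ \frac{1 - R^2}{(1 - R^2_j)(N - k - 1)} }

where R²j comes from regressing predictor j on all the other predictors; if R²j = 0 this reduces to the single-predictor case.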

213 Multiple R 213

214 Multiple R The degree of prediction – R (or Multiple R) – No longer equal to b. R² might be equal to the sum of the squares of B – Only if all xs are uncorrelated

215 In Terms of Variance Can also think of R² in terms of variance explained – Each IV explains some variance in the outcome – The IVs share some of their variance – Can't share the same variance twice

216 [Venn diagram] The total variance of Y = 1. Variance in Y accounted for by x1: r²(x1, y) = 0.36. Variance in Y accounted for by x2: r²(x2, y) = 0.36

217 In this model – R² = r²(y,x1) + r²(y,x2) – R² = 0.36 + 0.36 = 0.72 – R = √0.72 = 0.85. But – If x1 and x2 are correlated – No longer the case

218 [Venn diagram] The total variance of Y = 1. Variance in Y accounted for by x1: r²(x1, y) = 0.36. Variance in Y accounted for by x2: r²(x2, y) = 0.36. Variance shared between x1 and x2 (not equal to r(x1, x2))

219 So – We can no longer sum the r² – Need to sum them, and subtract the shared variance – i.e. the correlation. But – It's not the correlation between them – It's the correlation between them as a proportion of the variance of Y. Two different ways

220 If r(x1,x2) = 0 – r(xy) = b(x1) – R² equivalent to r²(y,x1) + r²(y,x2) – Based on estimates

221 221 r x1x2 = 0 –Equivalent to r yx1 2 + r yx2 2 Based on correlations

222 222 Can also be calculated using methods we have seen –Based on PRE (predicted value) –Based on correlation with prediction Same procedure with >2 IVs

223 223 Adjusted R 2 R 2 is on average an overestimate of population value of R 2 –Any x will not correlate 0 with Y –Any variation away from 0 increases R –Variation from 0 more pronounced with lower N Need to correct R 2 –Adjusted R 2

224 1 − R² – Proportion of unexplained variance – We multiply this by an adjustment. More variables – greater adjustment. More people – less adjustment. Calculation of Adj. R²
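A sketch of the standard calculation:

Adj.\,R^2 = 1 - (1 - R^2)\,\frac{N - 1}{N - k - 1}

the unexplained proportion is inflated by a factor that grows with the number of predictors k and shrinks as N grows.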

225 225

226 226 Extra Bits Some stranger things that can happen – Counter-intuitive

227 Suppressor variables Can be hard to understand – Very counter-intuitive. Definition – A predictor which increases the size of the parameters associated with other predictors above the size of their correlations

228 228 An example (based on Horst, 1941) –Success of trainee pilots –Mechanical ability ( x 1 ), verbal ability ( x 2 ), success ( y ) Correlation matrix

229 – Mechanical ability correlates 0.3 with success – Verbal ability correlates 0.0 with success – What will the parameter estimates be? – (Don't look ahead until you have had a guess)

230 230 Mechanical ability – b = 0.4 –Larger than r ! Verbal ability – b = -0.2 –Smaller than r !! So what is happening? –You need verbal ability to do the mechanical ability test –Not actually related to mechanical ability Measure of mechanical ability is contaminated by verbal ability
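A sketch of where those numbers come from, using the standard two-predictor formula and assuming the two ability measures correlate 0.5 with each other (a value consistent with the estimates quoted, not taken from the slide's matrix):

\beta_1 = \frac{r_{y1} - r_{y2} r_{12}}{1 - r_{12}^2} = \frac{0.3 - 0 \times 0.5}{1 - 0.25} = 0.4, \qquad \beta_2 = \frac{r_{y2} - r_{y1} r_{12}}{1 - r_{12}^2} = \frac{0 - 0.3 \times 0.5}{0.75} = -0.2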

231 High mech, low verbal – High mech: this is positive (.4) – Low verbal: a negative score times a negative weight is positive (−(−.2) = +.2) – Your mech is really high – you did well on the mechanical test, without being good at the words. High mech, high verbal – Well, you had a head start on mech, because of verbal, and need to be brought down a bit

232 Another suppressor? b1 = ?, b2 = ?

233 233 Another suppressor? b 1 =0.26 b 2 = -0.06

234 And another? b1 = ?, b2 = ?

235 235 And another? b 1 = 0.53 b 2 = -0.47

236 One more? b1 = ?, b2 = ?

237 237 One more? b 1 = 0.53 b 2 = 0.47

238 Suppression happens when two opposing forces are happening together – And have opposite effects. Don't throw away your IVs – Just because they are uncorrelated with the outcome. Be careful in interpretation of regression estimates – Really need the correlations too, to interpret what is going on – Cannot compare between studies with different predictors – Think about what you want to know, before throwing variables into the analysis

239 What to Control For? What is the added value of a better college – In terms of salary – More academic people go to better colleges – Control for: Ability? Social class? Mother's education? Parents' income? Course? Ethnic group? …

240 Decisions about control variables –Guided from theory Effect of gender –Controlling for hair length and skirt wearing? 240

241 241

242 Do dogs make kids healthier? – What to control for? Parents' weight? Yes: obese parents are more likely to have obese kids, so kids who are thinner relative to their parents are thinner. No: the dog might make the parent thinner – by controlling for parental weight, you're controlling for part of the effect of the dog


244 [Diagram: Dog → Kids' health, with good control variables and bad control variables]

245 [Diagram: Dog → Kids' health, with candidate controls: Income, Parent weight, Child asthma, Rural/Urban?, House/apartment?]

246 Standardised Estimates > 1 Correlations are bounded: −1 ≤ r ≤ +1 – We think of standardised regression estimates as being similarly bounded. But they are not – Can go > 1.00, < −1.00 – R cannot, because that is a proportion of variance

247 247 Three measures of ability –Mechanical ability, verbal ability 1, verbal ability 2 –Score on science exam –Before reading on, what are the parameter estimates?

248 248 Mechanical –About where we expect Verbal 1 –Very high Verbal 2 –Very low

249 What is going on – It's a suppressor again – "a predictor which increases the size of the parameters associated with other predictors above the size of their correlations". Verbal 1 and verbal 2 are correlated so highly – They need to cancel each other out

250 250 Variable Selection What are the appropriate predictors to use in a model? –Depends what you are trying to do Multiple regression has two separate uses –Prediction –Explanation

251 251 Prediction –What will happen in the future? –Emphasis on practical application –Variables selected (more) empirically –Value free Explanation –Why did something happen? –Emphasis on understanding phenomena –Variables selected theoretically –Not value free

252 Visiting the doctor – Precedes suicide attempts – Predicts suicide – Does not explain suicide – More on causality later on … Which are appropriate variables – To collect data on? – To include in analysis? – Decision needs to be based on theoretical knowledge of the behaviour of those variables – Statistical analysis of those variables (later), unless you didn't collect the data – Common sense (not a useful thing to say)

253 253 Variable Entry Techniques Entry-wise –All variables entered simultaneously Hierarchical –Variables entered in a predetermined order Stepwise –Variables entered according to change in R 2 –Actually a family of techniques

254 Entrywise regression – All variables entered simultaneously – All treated equally. Hierarchical regression – Entered in a theoretically determined order – Change in R² is assessed, and tested for significance – e.g. sex and age should not be treated equally with other variables; sex and age MUST be first (unchangeable) – Not to be confused with hierarchical linear modelling (MLM)

255 R-Squared Change SSE0, df0 – SSE and df for the first (smaller) model. SSE1, df1 – SSE and df for the second (larger) model
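A sketch of the test in those terms (standard formula):

F = \frac{(SSE_0 - SSE_1)/(df_0 - df_1)}{SSE_1/df_1}

with df0 − df1 and df1 degrees of freedom: the improvement per parameter added, relative to the error left in the larger model.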

256 Stepwise – Variables entered empirically – Variable which increases R² the most goes first, then the next … – Variables which have no effect can be removed from the equation. Example – House prices – what's important? – Size, lot size, list price, …

257 257 Stepwise Analysis –Data determines the order –Model 1: listing price, R 2 = 0.87 –Model 2: listing price + lot size, R 2 = 0.89

258 Hierarchical analysis – Theory determines the order – Model 1: Lot size + House size, R² = … – Model 2: + List price, R² = … – Change in R² = 0.41, p < 0.001

259 Which is the best model? – Entrywise – OK – Stepwise – excluded age, excluded size – MOST IMPORTANT PREDICTOR – Hierarchical – Listing price accounted for additional variance – Whoever decides the price has information that we don't. Other problems with stepwise – F and df are wrong (cheats with df) – Unstable results – Small changes (sampling variance) – large differences in models

260 – Uses a lot of paper – Don't use a stepwise procedure to pack your suitcase

261 261 Is Stepwise Always Evil? Yes All right, no Research goal is entirely predictive (technological) –Not explanatory (scientific) –What happens, not why N is large –40 people per predictor, Cohen, Cohen, Aiken, West (2003) Cross validation takes place

262 Alternatives to stepwise regression –More recently developed –Used for genetic studies 1000s of predictors, one outcome, small samples –Least Angle Regression LARS (least angle regression) Lasso (Least absolute shrinkage and selection operator) 262

263 Entry Methods in Stata Entrywise –What regress does Hierarchical –Two ways –Use hireg –Add on module net search hireg Then install 263

264 Hierarchical Regression Use (on one line) –hireg outcome (block1var1 block1var2) (block2var1 block2var2) Hireg reports –Parameter estimates for the two regressions –R 2 for each model, change in R 2 264

265 [Example hireg output: for each model, R² and F(df) – model 1: (1,98), model 2: (2,97) – with a p value for each R², and for the second model the R² change, F(df) change (1,97) and the p value for the change in R²]

266 Hierarchical Regression (Cont…) I don't like hireg, for two reasons – It's different to regression – It only works for OLS regression, not logistic, multinomial, Poisson, etc. Alternative 2: – Use test – The p-value associated with the change in R² for a variable is equal to the p-value for that variable.

267 Hierarchical Regression (Cont…) Example (using cars) – Parameters from the final model: hireg price () (extro) – [output: Coef., Std. Err., t, P>|t| and 95% Conf. Interval for extro] – R² change statistics: R² change, F(df) change (1,36), p – (What is the relationship between t and F?). We know the p-value of the R² change – When there is one predictor in the block – What about when there's more than one?

268 Hierarchical Regression (Cont) test isn't exactly what we want – But it is the same as what we want. Advantage of test – You can always use it (I can always remember how it works)
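A minimal Stata sketch of that use of test, with variable names borrowed from the house-price example (price, sqm, lotsize, beds – treat them as placeholders):

* full model, with the second block (lotsize and beds) added to sqm
regress price sqm lotsize beds
* joint test that the second block adds nothing:
test lotsize beds

The F (and p) reported by test is the same as the test of the change in R² you would get from entering sqm in block 1 and lotsize and beds in block 2.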

269 (For SPSS) SPSS calls them blocks Enter some variables, click next block –Enter more variables Click on Statistics –Click on R-squared change 269

270 Stepwise Regression Add stepwise: prefix With –Pr() – probability value to be removed from equation –Pe() – probability value to be entered into equation stepwise, pe(0.05) pr(0.2): reg price sqm lotsize originallis 270

271 271 A quick note on R 2 R 2 is sometimes regarded as the fit of a regression model –Bad idea If good fit is required – maximise R 2 –Leads to entering variables which do not make theoretical sense

272 Propensity Scores Another method of controlling for variables. Ensure that predictors are uncorrelated with one predictor – Don't need to control for them

273 x s Uncorrelated? Two cases when x s are uncorrelated Experimental design –Predictors are uncorrelated –We randomly assigned people to conditions to ensure that was the case Sample weights –We can deliberately sample Ensure that they are uncorrelated 273

274 20 women with college degree 20 women without college degree 20 men with college degree 20 men without college degree –Or use post hoc sample weights Propensity weighting –Weight to ensure that variables are uncorrelated –Usually done to avoid having to control –E.g. ethnic differences in PTSD symptoms –Can incorporate many more control variables

275 Propensity Scores Race profiling of police stops – Same time, place, area, etc.

276 276 Critique of Multiple Regression Goertzel (2002) –Myths of murder and multiple regression –Skeptical Inquirer (Paper B1) Econometrics and regression are junk science –Multiple regression models (in US) –Used to guide social policy

277 277 More Guns, Less Crime –(controlling for other factors) Lott and Mustard: A 1% increase in gun ownership –3.3% decrease in murder rates But: –More guns in rural Southern US –More crime in urban North (crack cocaine epidemic at time of data)

278 Executions Cut Crime No difference in crime rates between US states with or without the death penalty. Ehrlich (1975) controlled all variables that affect crime rates – Death penalty had an effect in reducing the crime rate. No statistical way to decide who's right

279 Legalised Abortion Donohue and Levitt (1999) – Legalised abortion in the 1970s cut crime in the 1990s. Lott and Whitley (2001) – Legalising abortion increased murder rates by "… 0.5 to 7 per cent." It's impossible to model these data – Controlling for other historical events – Crack cocaine (again)

280 Crime is still dropping in the US – Despite the recession. Levitt says it's mysterious, because the abortion effect should be over. Some suggest Xboxes, Playstations, etc., Netflix, DVRs – (Violent movies reduce crime).

281 281 Another Critique Berk (2003) –Regression analysis: a constructive critique (Sage) Three cheers for regression –As a descriptive technique Two cheers for regression –As an inferential technique One cheer for regression –As a causal analysis

282 Is Regression Useless? Do regression carefully – Don't go beyond data which you have a strong theoretical understanding of. Validate models – Where possible, validate predictive power of models in other areas, times, groups – Particularly important with stepwise

283 283 Lesson 6: Categorical Predictors

284 284 Introduction

285 285 Introduction So far, just looked at continuous predictors Also possible to use categorical (nominal, qualitative) predictors –e.g. Sex; Job; Religion; Region; Type (of anything) Usually analysed with t-test/ANOVA

286 Historical Note But these (t-test/ANOVA) are special cases of regression analysis – Aspects of General Linear Models (GLMs). So why treat them differently? – Fisher's fault – Computers' fault. Regression, as we have seen, is computationally difficult – Matrix inversion and multiplication – Can't do it without a computer

287 287 In the special cases where: You have one categorical predictor Your IVs are uncorrelated –It is much easier to do it by partitioning of sums of squares These cases –Very rare in applied research –Very common in experimental research Fisher worked at Rothamsted agricultural research station Never have problems manipulating wheat, pigs, cabbages, etc

288 288 In psychology –Led to a split between experimental psychologists and correlational psychologists –Experimental psychologists (until recently) would not think in terms of continuous variables Still (too) common to dichotomise a variable –Too difficult to analyse it properly –Equivalent to discarding 1/3 of your data

289 289 The Approach

290 290 The Approach Recode the nominal variable –Into one, or more, variables to represent that variable Names are slightly confusing –Some texts talk of dummy coding to refer to all of these techniques –Some (most) refer to dummy coding to refer to one of them –Most have more than one name

291 If a variable has g possible categories it is represented by g − 1 variables. Simplest case: – Smokes: Yes or No – Variable 1 represents Yes – Variable 2 is redundant – If it isn't yes, it's no

292 292 The Techniques

293 293 We will examine two coding schemes –Dummy coding For two groups For >2 groups –Effect coding For >2 groups Look at analysis of change –Equivalent to ANCOVA –Pretest-posttest designs

294 294 Dummy Coding – 2 Groups Sometimes called simple coding A categorical variable with two groups One group chosen as a reference group –The other group is represented in a variable e.g. 2 groups: Experimental (Group 1) and Control (Group 0) –Control is the reference group –Dummy variable represents experimental group Call this variable group1

295 295 For variable group1 –1 = Yes, 0=No

296 296 Some data Group is x, score is y

297 297 Control Group = 0 –Intercept = Score on Y when x = 0 –Intercept = mean of control group Experimental Group = 1 – b = change in Y when x increases 1 unit – b = difference between experimental group and control group
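A minimal Stata sketch of this model, using the slide's names (x = group, coded 0 = control, 1 = experimental; y = score):

* if the grouping variable were not already 0/1, make a dummy first, e.g.
* gen group1 = (group == 1)
regress y x
* _cons is the control-group mean; the coefficient on x is the difference
* between the experimental and control group means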

298 298 Gradient of slope represents difference between means

299 Dummy Coding – 3+ Groups With three groups the approach is similar. g = 3, therefore g − 1 = 2 variables needed. 3 Groups – Control – Experimental Group 1 – Experimental Group 2

300 Recoded into two variables – Note – do not need a 3rd variable – If we are not in group 1 or group 2 we MUST be in the control group – A 3rd variable would add no information (What would happen to the determinant?)

301 F and associated p – Tests H0 that b1 = b2 = 0. b1 and b2 and associated p-values – Test the difference between each experimental group and the reference group. To test the difference between experimental groups – Need to rerun the analysis (or just do ANOVA with post-hoc tests)

302 One more complication – Have now run multiple comparisons – Increases α – i.e. the probability of a type I error. Need to correct for this – Bonferroni correction – Multiply given p-values by two/three (depending how many comparisons were made)

303 303 Effect Coding Usually used for 3+ groups Compares each group (except the reference group) to the mean of all groups –Dummy coding compares each group to the reference group. Example with 5 groups –1 group selected as reference group Group 5

304 304 Each group (except reference) has a variable – 1 if the individual is in that group – 0 if not –-1 if in reference group

305 305 Examples Dummy coding and Effect Coding Group 1 chosen as reference group each time Data

306 [Tables: the dummy coding (group, dummy2, dummy3) and the effect coding (group, effect2, effect3) of the example data]

307 307 Dummy R =0.543, F =5.7, df=2, 27, p =0.009 b 0 = 52.4, b 1 = 3.9, p =0.100 b 2 = 7.7, p =0.002 Effect R =0.543, F =5.7, df =2, 27, p =0.009 b 0 = 56.27, b 1 = 0.03, p=0.980 b 2 = 3.8, p=0.007

308 In Stata Use the xi: prefix for dummy coding. Use the xi3: module for more codings. But – I don't like it, I do it by hand – I don't understand what it's doing – It makes very long variables – And then I can't use test – BUT: if doing stepwise, you need to keep the variables together. Example: xi: reg outcome contpred i.catpred – Put i. in front of categorical predictors. This has changed in Stata 11: xi: is no longer needed

309 xi: reg salary i.job_description [Output: Coef., Std. Err., t and P>|t| for _Ijob_desc~2, _Ijob_desc~3 and _cons]

310 Exercise golf balls –Which is best? 310

311 In SPSS SPSS provides two equivalent procedures for regression – Regression – GLM. GLM will: – Automatically code categorical variables – Automatically calculate interaction terms – Allow you to not understand. GLM won't: – Give standardised effects – Give hierarchical R² p-values

312 312 ANCOVA and Regression

313 Test – (Which is a trick; but it's designed to make you think about it). Use the bank data (Ex 5.3) – Compare the pay rise (difference between salbegin and salary) – For ethnic minority and non-minority staff. What do you find?

314 314 ANCOVA and Regression Dummy coding approach has one special use –In ANCOVA, for the analysis of change Pre-test post-test experimental design –Control group and (one or more) experimental groups –Tempting to use difference score + t-test / mixed design ANOVA –Inappropriate

315 315 Salivary cortisol levels –Used as a measure of stress –Not absolute level, but change in level over day may be interesting Test at: 9.00am, 9.00pm Two groups –High stress group (cancer biopsy) Group 1 –Low stress group (no biopsy) Group 0

316 Correlation of AM and PM = … (p = 0.008). Has there been a significant difference in the rate of change of salivary cortisol? – 3 different approaches

317 Approach 1 – find the differences, do a t-test – t = 1.31, df = 26, p = 0.203. Approach 2 – mixed ANOVA, look for the interaction effect – F = 1.71, df = 1, 26, p = 0.203 – F = t². Approach 3 – regression (ANCOVA) based approach

318 318 –IVs: AM and group –outcome: PM – b 1 (group) = 3.59, standardised b 1 =0.432, p = 0.01 Why is the regression approach better? –The other two approaches took the difference –Assumes that r = 1.00 –Any difference from r = 1.00 and you add error variance Subtracting error is the same as adding error

319 319 Using regression –Ensures that all the variance that is subtracted is true –Reduces the error variance Two effects –Adjusts the means Compensates for differences between groups –Removes error variance Data is am-pm cortisol

320 More on Change If the difference score is correlated with either pre-test or post-test – Subtraction fails to remove the difference between the scores – If the two scores are uncorrelated, the difference will be correlated with both – Failure to control – With equal SDs and r = 0, the correlation of change and pre-score = −0.707

321 Even More on Change A topic of surprising complexity – What I said about difference scores isn't always true – Lord's paradox – it depends on the precise question you want to answer – Collins and Horn (1993), Best methods for the analysis of change – Collins and Sayer (2001), New methods for the analysis of change – More later

322 322 Lesson 7: Assumptions in Regression Analysis

323 The Assumptions 1. The distribution of residuals is normal (at each value of the outcome). 2. The variance of the residuals for every set of values for the predictor is equal; violation is called heteroscedasticity. 3. The error term is additive – no interactions. 4. At every value of the outcome the expected (mean) value of the residuals is zero – no non-linear relationships

324 324 5.The expected correlation between residuals, for any two cases, is 0. The independence assumption (lack of autocorrelation) 6.All predictors are uncorrelated with the error term. 7.No predictors are a perfect linear function of other predictors (no perfect multicollinearity) 8.The mean of the error term is zero.

325 325 What are we going to do … Deal with some of these assumptions in some detail Deal with others in passing only –look at them again later on

326 326 Assumption 1: The Distribution of Residuals is Normal at Every Value of the outcome

327 327 Look at Normal Distributions A normal distribution –symmetrical, bell-shaped (so they say)

328 328 What can go wrong? Skew –non-symmetricality –one tail longer than the other Kurtosis –too flat or too peaked –kurtosed Outliers –Individual cases which are far from the distribution

329 329 Effects on the Mean Skew –biases the mean, in direction of skew Kurtosis –mean not biased –standard deviation is –and hence standard errors, and significance tests

330 330 Examining Univariate Distributions Graphs –Histograms –Boxplots –P-P plots Calculation based methods

331 331 Histograms A and B

332 332 C and D

333 333 E & F

334 334 Histograms can be tricky ….

335 335 Boxplots

336 336 P-P Plots A & B

337 337 C & D

338 338 E & F

339 339 Skew and Kurtosis statistics Outlier detection statistics Calculation Based

340 Skew and Kurtosis Statistics Normal distribution – skew = 0 – kurtosis = 0. Two methods for calculation – Fisher's and Pearson's – Very similar answers. Associated standard error – can be used for significance (t-test) of departure from normality – not actually very useful – Never normal above N = 400

341 341

342 342 Outlier Detection Calculate distance from mean –z-score (number of standard deviations) –deleted z-score that case biased the mean, so remove it –Look up expected distance from mean 1% 3+ SDs

343 343 Non-Normality in Regression

344 344 Effects on OLS Estimates The mean is an OLS estimate The regression line is an OLS estimate Lack of normality –biases the position of the regression slope –makes the standard errors wrong probability values attached to statistical significance wrong

345 Checks on Normality Check residuals are normally distributed – Draw a histogram of the residuals. Use regression diagnostics – Lots of them – Most aren't very interesting

346 346 Regression Diagnostics Residuals –Standardised, studentised-deleted –look for cases > |3| (?) Influence statistics –Look for the effect a case has –If we remove that case, do we get a different answer? –DFBeta, Standardised DFBeta changes in b

347 – DfFit, Standardised DfFit – change in predicted value. Distances – measures of distance from the centroid – some include the IV, some don't

348 348 More on Residuals Residuals are trickier than you might have imagined Raw residuals –OK Standardised residuals –Residuals divided by SD

349 349 Standardised / Studentised Now we can calculate the standardised residuals –SPSS calls them studentised residuals –Also called internally studentised residuals

350 350 Deleted Studentised Residuals Studentised residuals do not have a known distribution –Cannot use them for inference Deleted studentised residuals –Externally studentised residuals –Studentized (jackknifed) residuals Distributed as t With df = N – k – 1
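A minimal Stata sketch of obtaining these after a regression (house-price variable names used as placeholders):

regress price sqm
predict rstd, rstandard    // standardised (internally studentised) residuals
predict rstu, rstudent     // deleted (externally studentised) residuals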

351 351 Testing Significance We can calculate the probability of a residual –Is it sampled from the same population BUT –Massive type I error rate –Bonferroni correct it Multiply p value by N

352 Bivariate Normality We didn't just say residuals normally distributed, we said at every value of the outcome. Two variables can be normally distributed – univariate – but not bivariate

353 353 Couples' IQs –male and female –Seem reasonably normal

354 354 But wait!!

355 355 When we look at bivariate normality –not normal – there is an outlier So plot X against Y OK for bivariate –but – may be a multivariate outlier –would need a graph in 3+ dimensions –can't (usefully) draw a graph in more than 3 dimensions But we can look at the residuals instead …

356 356 IQ histogram of residuals

357 357 Multivariate Outliers … Will be explored later in the exercises So we move on …

358 358 What to do about Non-Normality Skew and Kurtosis –Skew – much easier to deal with –Kurtosis – less serious anyway Transform data –removes skew –positive skew – log transform –negative skew – square

359 359 Transformation May need to transform IV and/or outcome –More often the outcome time, income, symptoms (e.g. depression) are all positively skewed –can cause non-linear effects (more later) if only one is transformed –alters interpretation of the unstandardised parameter –May alter meaning of variable Some people say this is such a big problem that you should –Never transform –It may add / remove non-linear and moderator effects

360 360 Change measures –increase sensitivity at ranges avoiding floor and ceiling effects Outliers –Can be tricky –Why did the outlier occur? Error? Delete them. Weird person? Probably delete them Normal person? Tricky.

361 361 –You are trying to model a process is the data point outside the process? e.g. lottery winners, when looking at salary; a yawn, when looking at reaction time –Which is better? A good model, which explains 99% of your data? (because we threw outliers out) A poor model, which explains all of it? (because we kept outliers in) I prefer a good model

362 More on House Prices A website tracks and predicts house prices –In the USA Sometimes detects outliers –We don't trust this selling price –We haven't used it 362

363 Example in Stata
reg salary educ
predict res, res
hist res
gen logsalary = log(salary)
reg logsalary educ
predict logres, res
hist logres
363



366 But … Parameter estimates change Interpretation of parameter estimate is different Exercise 7.0,

367 Bootstrapping Bootstrapping is very, very cool And very, very clever But very, very simple 367

368 Bootstrapping When we estimate a test statistic (F or r or t or χ²) We rely on knowing the sampling distribution Which we know –If the distributional assumptions are satisfied 368

369 Estimate the Distribution Bootstrapping lets you: –Skip the bit about the distribution –Estimate the sampling distribution from the data This shouldn't be allowed –Hence bootstrapping –But it is 369

370 How to Bootstrap We resample, with replacement Take our sample –Sample 1 individual Put that individual back, so that they can be sampled again –Sample another individual Keep going until we've sampled as many people as were in the sample Analyze the data Repeat the process B times –Where B is a big number 370
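A minimal sketch of one bootstrap replicate done by hand in Stata (bsample resamples the data in memory with replacement; variable names assumed):
preserve
bsample                 // draw N observations with replacement
reg salary educ         // analyse the resampled data, note the coefficient of interest
restore
* repeat B times in a loop – or, in practice, let the bootstrap prefix / vce(bootstrap) do it for you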

371 Example 371 (Table showing the original sample alongside three bootstrap resamples, B1, B2 and B3, each drawn from it with replacement)

372 Analyze each dataset –Gives the sampling distribution of the statistic 2 approaches to CI or P Semi-parametric –Calculate the standard deviation of the bootstrap estimates –Call that the standard error –Does not make assumptions about the distribution of the data Makes an assumption about the sampling distribution 372

373 Non-parametric –Stata calls this percentile –Just count –If you have 1000 samples –the 25th value is the lower CI limit –the 975th is the upper CI limit –P-value is the proportion that cross zero Non-parametric needs more samples 373
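After a bootstrapped regression in Stata, percentile intervals can be requested with estat bootstrap – a sketch:
reg salary educ, vce(bootstrap, reps(2000))
estat bootstrap, percentile     // percentile CIs; estat bootstrap, all also shows normal and bias-corrected intervals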

374 Bootstrapping in Stata Very easy: –Use bootstrap: (or bs: or bstrap: ) prefix or –(Better) use vce(bootstrap) option By default does 50 samples –Not enough –Use reps() –At least

375 Example
reg salary salbegin educ, vce(bootstrap, reps(50))
The output shows, for each predictor (e.g. salbegin), the observed coefficient, the bootstrap std. err. and z – run it again and the bootstrap std. err. and z for salbegin come out slightly different

376 More Reps 1,000 reps –Z = … Again –Z = … (slightly different) 10,000 reps –Z = 17.23 –and again, almost unchanged

377 377 Exercise 7.2, 7.3

378 378 Assumption 2: The variance of the residuals for every set of values for the predictor is equal.

379 379 Heteroscedasticity This assumption is about heteroscedasticity of the residuals –Hetero = different –Scedastic = scattered We don't want heteroscedasticity –we want our data to be homoscedastic Draw a scatterplot to investigate

380 380

381 381 Only works with one IV –with more we would need every combination of IVs Easy to get round –use the predicted values –and the residuals Plot predicted values against residuals A bit like turning the scatterplot so the line of best fit is flat
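In Stata this plot is built in as rvfplot, or can be made by hand (a sketch):
reg salary educ salbegin
rvfplot, yline(0)            // residuals vs fitted (predicted) values
* or by hand:
predict pred, xb
predict res, residuals
scatter res pred, yline(0)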

382 382 Good – no heteroscedasticity

383 383 Bad – heteroscedasticity

384 384 Testing Heteroscedasticity White's test 1.Do regression, save residuals 2.Square the residuals 3.Square the IVs 4.Calculate the interactions of the IVs –e.g. x1×x2, x1×x3, x2×x3

385 385 5.Run regression using –squared residuals as outcome –IVs, squared IVs, and interactions as IVs 6.Test statistic = N × R² –Distributed as χ² –df = k (for the second regression) Use education and salbegin to predict salary (employee data.sav) –R² = 0.113, N = 474, χ² = 53.5, df = 5, p < 0.001 Automatic in Stata –estat imtest, white
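The same steps by hand in Stata (a sketch with the two predictors above):
reg salary educ salbegin
predict res, residuals
gen res2 = res^2
gen educ2 = educ^2
gen salbegin2 = salbegin^2
gen educXsal = educ*salbegin
reg res2 educ salbegin educ2 salbegin2 educXsal
display "White chi2 = " e(N)*e(r2) ", p = " chi2tail(5, e(N)*e(r2))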

386 386 Plot of Predicted and Residual

387 White's Test as a Test of Interest Possible to have a theory that predicts heteroscedasticity Lupien et al., 2006 –Heteroscedasticity in the relationship of hippocampal volume and age 387

388 388 Magnitude of Heteroscedasticity Chop the data into 5 slices (on the predicted values) –Calculate the variance of the residuals in each slice –Check the ratio of the smallest to the largest –Less than 5 is OK

389
gen slice = 1
replace slice = 2 if pred > …
replace slice = 3 if pred > …
replace slice = 4 if pred > …
replace slice = 5 if pred > …
bysort slice: su pred
(Doesn't look too bad, thanks to skew in predictors) 389
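An alternative sketch that cuts the slices at quintiles and compares the spread of the residuals across them:
reg salary educ salbegin
predict pred, xb
predict res, residuals
xtile slice = pred, nq(5)        // five equal-sized slices of the predicted values
bysort slice: summarize res      // compare the residual SDs (square them for variances) across slices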

390 390 Dealing with Heteroscedasticity Use Huber-White (robust) estimates –Also called sandwich estimates –Also called empirical estimates Use survey techniques –Relatively straightforward in SAS and Stata, fiddly in SPSS –Google: SPSS Huber-White

391 Why's it a Sandwich? The usual OLS standard errors come from Var(b) = σ²(X′X)⁻¹. The sandwich (Huber-White) estimator replaces this with Var(b) = (X′X)⁻¹ X′ diag(e²) X (X′X)⁻¹ – the two (X′X)⁻¹ terms are the bread, and X′ diag(e²) X is the filling. 391

392 Example
reg salary educ
–Standard errors: 204, 2821
reg salary educ, robust
–Standard errors: 267, 3347
SEs usually go up, can go down 392

393 393 Heteroscedasticity – Implications and Meanings Implications What happens as a result of heteroscedasticity? –Parameter estimates are correct (not biased) –Standard errors (hence p-values) are incorrect

394 394 However … If there is no skew in predicted scores –P-values a tiny bit wrong If skewed, –P-values can be very wrong Exercise 7.4

395 Robust SE Haiku T-stat looks too good. Use robust standard errors significance gone 395

396 396 Meaning What is heteroscedasticity trying to tell us? –Our model is wrong – it is misspecified –Something important is happening that we have not accounted for e.g. amount of money given to charity ( given ) –depends on: earnings degree of importance person assigns to the charity ( import )

397 397 Do the regression analysis –R² = 0.60, p < … – seems quite good –b0 = 0.24, p = 0.97 –b1 = 0.71, p < … –b2 = 0.23, p = … White's test –χ² = 18.6, df = 5, p = 0.002 The plot of predicted values against residuals …

398 398 Plot shows heteroscedastic relationship

399 399 Which means … –the effects of the variables are not additive –If you think that what a charity does is important you might give more money how much more depends on how much money you have

400 400

401 401 One more thing about heteroscedasticity –it is the equivalent of homogeneity of variance in ANOVA/t-tests

402 Exercise 7.4, 7.5,

403 403 Assumption 3: The Error Term is Additive

404 404 Additivity What heteroscedasticity shows you –effects of variables need to be additive (we assume no interaction between the variables) Heteroscedasticity doesn't always show it to you –can test for it, but it's hard work –(same as the homogeneity of covariance assumption in ANCOVA) Have to know it from your theory A specification error

405 405 Additivity and Theory Two IVs –Alcohol has sedative effect A bit makes you a bit tired A lot makes you very tired –Some painkillers have sedative effect A bit makes you a bit tired A lot makes you very tired –A bit of alcohol and a bit of painkiller doesnt make you very tired –Effects multiply together, dont add together

406 406 If you don't test for it –It's very hard to know that it will happen So many possible non-additive effects –Cannot test for all of them –Can test for the obvious ones In medicine –Choose to test for salient non-additive effects –e.g. sex, race More on this, when we look at moderators
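A hedged sketch of testing one obvious non-additive effect in Stata – the variables (alcohol, painkiller, tiredness) are hypothetical, and moderators are covered properly later:
gen alcXpain = alcohol*painkiller
reg tiredness alcohol painkiller alcXpain
* or let Stata build the product term with factor-variable notation:
reg tiredness c.alcohol##c.painkiller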

407 Exercise 7.6 Exercise

408 408 Assumption 4: At every value of the outcome the expected (mean) value of the residuals is zero

409 409 Linearity Relationships between variables should be linear –best represented by a straight line Not a very common problem in social sciences –measures are not sufficiently accurate (much measurement error) to make a difference –R² too low unlike, say, physics

410 410 Relationship between speed of travel and fuel used

411 411 R² = … –looks pretty good –know speed, make a good prediction of fuel BUT –look at the chart –if we know speed we can make a perfect prediction of fuel used –R² should be 1.00

412 412 Detecting Non-Linearity Residual plot –just like heteroscedasticity Using this example –very, very obvious –usually pretty obvious

413 413 Residual plot

414 414 Linearity: A Case of Additivity Linearity = additivity along the range of the IV Jeremy rides his bicycle harder –Increase in speed depends on current speed –Not additive, multiplicative –MacCallum and Mar (1995). Distinguishing between moderator and quadratic effects in multiple regression. Psychological Bulletin.

415 415 Assumption 5: The expected correlation between residuals, for any two cases, is 0. The independence assumption (lack of autocorrelation)

416 416 Independence Assumption Also: lack of autocorrelation Tricky one –often ignored –exists for almost all tests All cases should be independent of one another –knowing the value of one case should not tell you anything about the value of other cases

417 417 How is it Detected? Can be difficult –need some clever statistics (multilevel models) Better off avoiding situations where it arises –Or handling it when it does arise Residual Plots

418 418 Residual Plots Were data collected in time order? –If so plot ID number against the residuals –Look for any pattern Test for linear relationship Non-linear relationship Heteroscedasticity
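If the data were collected in time order, a quick Stata sketch of this plot (assuming the data are still in that order):
reg salary educ
predict res, residuals
gen order = _n                  // case order as a stand-in for collection order
scatter res order, yline(0)     // look for drift, waves, or changing spread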

419 419

420 420 How does it arise? Two main ways Time-series analyses –When cases are time periods weather on Tuesday and weather on Wednesday are correlated inflation 1972 and inflation 1973 are correlated Clusters of cases –patients treated by three doctors –children from different classes –people assessed in groups

421 421 Why does it matter? Standard errors can be wrong –therefore significance tests can be wrong Parameter estimates can be wrong –really, really wrong –from positive to negative An example –students do an exam (on statistics) –choose one of three questions IV: time outcome: grade

422 422 Result, with line of best fit

423 423 Result shows that –people who spent longer in the exam achieve better grades BUT … –we haven't considered which question people answered –we might have violated the independence assumption the outcome will be autocorrelated Look again –with questions marked

424 424 Now somewhat different

425 425 Now, people that spent longer got lower grades –questions differed in difficulty –do a hard one, get better grade –if you can do it, you can do it quickly

426 Dealing with Non-Independence For time series data –Time series analysis (another course) –Multilevel models (hard – also another course) For clustered data –Robust standard errors –Generalized estimating equations –Multilevel models 426

427 Cluster Robust Standard Errors Predictor: School size Outcome: Grades Sample: –20 schools –20 children per school What is the N? 427

428 Robust Standard Errors Sample is: –400 children – is it 400? –Not really Each child adds information First child in a school adds lots of information about that school –100th child in a school adds less information –How much less depends on how similar the children in the school are –20 schools The effective N is more than 20, but less than 400

429 Robust SE in Stata Very easy reg outcome predictor, robust cluster(clusterid) BUT –Only to be used where clustering is a nuisance only Only adjusts standard errors, not parameter estimates Only to be used where parameter estimates shouldn't be affected by clustering 429

430 Example of Robust SE Effects of incentives for attendance at adult literacy class –Some students rewarded for attendance –Others not rewarded 152 classes randomly assigned to each condition –Scores measured at mid term and final 430

431 Example of Robust SE Naïve –reg postscore tx midscore –Est: … SE: … Clustered –reg postscore tx midscore, robust cluster(classid) –Est: … SE: …

432 Problem with Robust Estimates Only corrects standard error –Does not correct estimate Other predictors must be uncorrelated with predictors of group membership –Or estimates wrong Two alternatives: –Generalized estimating equations (gee) –Multilevel models 432

433 Independence + Heteroscedasticity Assumption is that residuals are: –Independently and identically distributed i.i.d. Same procedure used for both problems –Really, same problem 433

434 Exercise 7.9, exercise

435 435 Assumption 6: All predictor variables are uncorrelated with the error term.

436 436 Uncorrelated with the Error Term A curious assumption –by definition, the residuals are uncorrelated with the predictors (try it and see, if you like) There are no other predictors that are important –That correlate with the error –i.e. Have an effect

437 437 Problem in economics –Demand increases supply –Supply increases wages –Higher wages increase demand OLS estimates will be (badly) biased in this case –need a different estimation procedure –two-stage least squares simultaneous equation modelling –Instrumental variables

438 Another Haiku Supply and demand: without a good instrument, not identified. 438

439 439 Assumption 7: No predictors are a perfect linear function of other predictors no perfect multicollinearity

440 440 No Perfect Multicollinearity IVs must not be linear functions of one another –matrix of correlations of IVs is not positive definite –cannot be inverted –analysis cannot proceed Have seen this with –age, age at start, time working (can't have all three in the model) –also occurs with a subscale and the total in the model at the same time

441 441 Large amounts of collinearity –a problem (as we shall see) sometimes –not an assumption Exercise 7.11

442 442 Assumption 8: The mean of the error term is zero. You will like this one.

443 443 Mean of the Error Term = 0 Mean of the residuals = 0 That is what the constant is for –if the mean of the error term deviates from zero, the constant soaks it up –(note: Greek letters when we are talking about population values)

444 444 Can do regression without the constant –Usually a bad idea –E.g. R² = 0.995, p < … Looks good

445 445

446 446 Lesson 8: Issues in Regression Analysis Things that alter the interpretation of the regression equation

447 447 The Four Issues Causality Sample sizes Collinearity Measurement error

448 448 Causality

449 449 What is a Cause? Debate about definition of cause –some statistics (and philosophy) books try to avoid it completely –We are not going into depth just going to show why it is hard Two dimensions of cause –Ultimate versus proximal cause –Determinate versus probabilistic

450 450 Proximal versus Ultimate Why am I here? –I walked here because –This is the location of the class because –Eric Tanenbaum asked me because –(I don't know) –because I was in my office when he rang because –I was a lecturer at Derby University because –I saw an advert in the paper because

451 451 –I exist because –My parents met because –My father had a job … Proximal cause –the direct and immediate cause of something Ultimate cause –the thing that started the process off –I fell off my bicycle because of the bump –I fell off because I was going too fast

452 452 Determinate versus Probabilistic Cause Why did I fall off my bicycle? –I was going too fast –But every time I ride too fast, I don't fall off –Probabilistic cause Why did my tyre go flat? –A nail was stuck in my tyre –Every time a nail sticks in my tyre, the tyre goes flat –Deterministic cause

453 453 Can get into trouble by mixing them together –Eating deep fried Mars Bars and doing no exercise are causes of heart disease –My Grandad ate three deep fried Mars Bars every day, and the most exercise he ever got was when he walked to the shop next door to buy one –(Deliberately?) confusing deterministic and probabilistic causes

454 454 Criteria for Causation Association (correlation) Direction of Influence (a → b) Isolation (not c → a and c → b)

455 455 Association Correlation does not mean causation –we all know But –Causation does mean correlation Need to show that two things are related –may be correlation –may be regression when controlling for third (or more) factor

456 456 Relationship between price and sales –suppliers may be cunning –when people want it more, they stick the price up –So – no relationship between price and sales

457 457 –Until (of course) we control for demand –b1 (Price) = … –b2 (Demand) = 0.94 But which variables do we enter?

458 458 Direction of Influence Relationship between A and B –three possible processes –A → B (A causes B) –B → A (B causes A) –C → A and C → B (C causes A & B)

459 459 How do we establish the direction of influence? –Longitudinally? The barometer drops, then the storm comes –Now if we could just get that barometer needle to stay where it is … Where the role of theory comes in (more on this later)

460 460 Isolation Isolate the outcome from all other influences –as experimenters try to do Cannot do this –can statistically isolate the effect –using multiple regression

461 461 Role of Theory Strong theory is crucial to making causal statements Fisher said: to make causal statements, make your theories elaborate –don't rely purely on statistical analysis Need strong theory to guide analyses –what critics of non-experimental research don't understand

462 462 S.J. Gould – a critic –says correlate the price of petrol and his age, for the last 10 years –find a correlation –Ha! (He says) that doesn't mean there is a causal link –Of course not! (We say). No social scientist would do that analysis without first thinking (very hard) about the possible causal relations between the variables of interest Would control for time, prices, etc …

463 463 Atkinson, et al. (1996) –relationship between college grades and number of hours worked –negative correlation –Need to control for other variables – ability, intelligence Gould says most correlations are non-causal (1982, p243) –Of course!!!!

464 464 I drink a lot of beer (a web of correlated events: karaoke, jokes (about statistics), children wake early, bathroom, headache, sleeping, equations (beermat), laugh, thirsty, fried breakfast, no beer, curry, chips, falling over, lose keys, curtains closed) –120 non-causal correlations –16 causal relations

465 465 Abelson (1995) elaborates on this –method of signatures A collection of correlations relating to the process –the signature of the process e.g. tobacco smoking and lung cancer –can we account for all of these findings with any other theory?

466 466 1.The longer a person has smoked cigarettes, the greater the risk of cancer. 2.The more cigarettes a person smokes over a given time period, the greater the risk of cancer. 3.People who stop smoking have lower cancer rates than do those who keep smoking. 4.Smokers' cancers tend to occur in the lungs, and be of a particular type. 5.Smokers have elevated rates of other diseases. 6.People who smoke cigars or pipes, and do not usually inhale, have abnormally high rates of lip cancer. 7.Smokers of filter-tipped cigarettes have lower cancer rates than other cigarette smokers. 8.Non-smokers who live with smokers have elevated cancer rates. (Abelson, 1995)

467 467 –In addition, there should be no anomalous correlations If smokers had more fallen arches than non-smokers, that would not be consistent with the theory Failure to use theory to select appropriate variables –specification error –e.g. in the previous example –Predict wealth from price and sales: increase the price, wealth increases; increase sales, wealth increases

468 468 Sometimes these are indicators of the process, not the process itself –e.g. barometer – stopping the needle won't help –e.g. inflation? Indicator or cause of economic health?

469 469 No Causation without Experimentation Blatantly untrue –I don't doubt that the sun shining makes us warm Why the aversion? –Pearl (2000) says the problem is that there is no mathematical operator for causation (in the way that = expresses equality) –No one realised that you needed one –Until you build a robot

470 470 AI and Causality A robot needs to make judgements about causality Needs to have a mathematical representation of causality –Suddenly, a problem! –It doesn't exist Most operators are non-directional Causality is directional

471 471 Sample Sizes How many subjects does it take to run a regression analysis?

472 472 Introduction Social scientists don't worry enough about the sample size required –Why didn't you get a significant result? –I didn't have a large enough sample Not a common answer, but a very common reason More recently awareness of sample size is increasing –use too few – no point doing the research –use too many – waste their time

473 473 Research funding bodies Ethical review panels –both becoming more interested in sample size calculations We will look at two approaches –Rules of thumb (quite quickly) –Power Analysis (more slowly)

474 474 Rules of Thumb Lots of simple rules of thumb exist –10 cases per IV –and at least 100 cases –Green (1991) more sophisticated To test significance of R² – N = 50 + 8k To test significance of slopes, N = 104 + k Rules of thumb don't take into account all the information that we have –Power analysis does

475 475 Power Analysis Introducing Power Analysis Hypothesis test –tells us the probability of a result of that magnitude occurring, if the null hypothesis is correct (i.e. there is no effect in the population) Doesn't tell us –the probability of that result, if the null hypothesis is false (i.e. there actually is an effect in the population)

476 476 According to Cohen (1982) all null hypotheses are false –everything that might have an effect, does have an effect it is just that the effect is often very tiny

477 477 Type I Errors Type I error is false rejection of H0 Probability of making a type I error = α – the significance value cut-off usually 0.05 (by convention) Always this value Not affected by –sample size –type of test

478 478 Type II errors Type II error is false acceptance of the null hypothesis –Much, much trickier We think we have some idea –we almost certainly don't Example –I do an experiment (random sampling, all assumptions perfectly satisfied) –I find p = 0.05

479 479 –You repeat the experiment exactly different random sample from same population –What is probability you will find p < 0.05? –Answer: 0.5 –Another experiment, I find p = 0.01 –Probability you find p < 0.05? –Answer: 0.79 Very hard to work out –not intuitive –need to understand non-central sampling distributions (more in a minute)

480 480 Probability of type II error = beta (β) –same symbol as a population regression parameter (to be confusing) Power = 1 – β –Probability of getting a significant result (given that there is a significant result to be found)

481 481 State of the World vs Research Findings –H0 true (no effect to be found), we find no effect (p > 0.05): correct –H0 true (no effect to be found), we find an effect (p < 0.05): Type I error, p = α –H0 false (effect to be found), we find no effect (p > 0.05): Type II error, p = β –H0 false (effect to be found), we find an effect (p < 0.05): correct, power = 1 – β

482 482 Four parameters in power analysis –α – prob. of Type I error –β – prob. of Type II error (power = 1 – β) –Effect size – size of effect in population –N Know any three, can calculate the fourth –Look at them one at a time

483 483 α – Probability of Type I error –Usually set to 0.05 –Somewhat arbitrary sometimes adjusted because of circumstances –rarely because of power analysis –May want to adjust it, based on power analysis

484 484 β – Probability of type II error –Power (probability of finding a result) = 1 – β –Standard is 80% Some argue for 90% –Implication that a Type I error is 4 times more serious than a Type II error adjust the ratio with compromise power analysis

485 485 Effect size in the population –Most problematic to determine –Three ways 1.What effect size would be useful to find? R² = … – no use (probably) 2.Base it on previous research –what have other people found? 3.Use Cohen's conventions –small R² = 0.02 –medium R² = 0.13 –large R² = 0.26

486 486 –Effect size usually measured as f² –For R²: f² = R² / (1 – R²)

487 487 –For (standardised) slopes: f² = sr² / (1 – R²) –Where sr² is the contribution to the variance accounted for by the variable of interest –i.e. sr² = R² (with variable) – R² (without) the change in R² in hierarchical regression
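A tiny worked check of these formulas in Stata (the numbers are made up, apart from Cohen's "medium" convention):
display 0.13/(1 - 0.13)      // f2 for R2 = .13  -> about 0.15
display 0.05/(1 - 0.40)      // f2 for sr2 = .05 when the full-model R2 is .40 -> about 0.083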

488 488 N – the sample size –usually use the other three parameters to determine this –sometimes adjust other parameters (α) based on this –e.g. You can have 50 participants. No more.

489 489 Doing power analysis With a power analysis program –SamplePower, GPower (free), Nquery –With the Stata command sampsi Which I find very confusing But we'll use it anyway

490 sampsi Limited in usefulness –A categorical, two group predictor sampsi 0 0.5, pre(1) r01(0.5) n1(50) sd(1) –Find power for detecting an effect of 0.5 –when there's one other variable at baseline –which correlates 0.5 with the outcome –50 people in each group –when the sd is 1

491 sampsi … Method: ANCOVA –relative efficiency = … –adjustment to sd = … –adjusted sd1 = … Estimated power: power = …

492 GPower Better for regression designs 492



495 495 Underpowered Studies Research in the social sciences is often underpowered –Why? –See Paper B11 – the persistence of underpowered studies

496 496 Extra Reading Power traditionally focuses on p values –What about CIs? –Paper B8 – Obtaining regression coefficients that are accurate, not simply significant

497 Exercise

498 498 Collinearity

499 499 Collinearity as Issue and Assumption Collinearity (multicollinearity) –the extent to which the predictors are (multiply) correlated If R 2 for any IV, using other IVs = 1.00 –perfect collinearity –variable is linear sum of other variables –regression will not proceed –(SPSS will arbitrarily throw out a variable)

500 500 R 2 < 1.00, but high –other problems may arise Four things to look at in collinearity –meaning –implications –detection –actions

501 501 Meaning of Collinearity Literally co-linearity –lying along the same line Perfect collinearity –when some IVs predict another –Total = S1 + S2 + S3 + S4 –S1 = Total – (S2 + S3 + S4) –rare

502 502 Less than perfect –when some IVs are close to predicting other IVs –correlations between IVs are high (usually, but not always) high multiple correlations

503 503 Implications Affects the stability of the parameter estimates –and so the standard errors of the parameter estimates –and so the significance and CIs Because –shared variance, which the regression procedure doesn't know where to put

504 504 Sex differences –due to genetics? –due to upbringing? –(almost) perfect collinearity statistically impossible to tell

505 505 When collinearity is less than perfect –increases variability of estimates between samples –estimates are unstable –reflected in the variances, and hence standard errors

506 506 Detecting Collinearity Look at the parameter estimates –large standardised parameter estimates (>0.3?) which are not significant – be suspicious Run a series of regressions –each IV as outcome –all other IVs as IVs for each IV

507 507 Sounds like hard work? –SPSS does it for us! Ask for collinearity diagnostics –Tolerance – calculated for every IV –Variance Inflation Factor (VIF) – its square root is the amount the s.e. has been increased by
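Stata will do the same after a regression (a sketch; the predictor names are illustrative assumptions from the employee data):
reg salary educ salbegin jobtime
estat vif        // VIF and 1/VIF (tolerance) for each predictor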

508 508 Actions What you can do about collinearity – no quick fix (Fox, 1991) 1.Get new data avoids the problem address the question in a different way e.g. find people who have been raised as the 'wrong' gender they exist, but are rare Not a very useful suggestion

509 509 2.Collect more data not different data, more data collinearity increases the standard error (se) se decreases as N increases get a bigger N 3.Remove / Combine variables If an IV correlates highly with other IVs it is not telling us much new If you have two (or more) IVs which are very similar e.g. 2 measures of depression, socio-economic status, achievement, etc

510 510 sum them, average them, remove one Many measures use principal components analysis to reduce them 4.Use stepwise regression (or some flavour of it) See previous comments Can be useful in a theoretical vacuum 5.Ridge regression not very useful behaves weirdly

511 Exercise 8.2, 8.3,

512 512 Measurement Error

513 513 What is Measurement Error In social science, it is unlikely that we measure any variable perfectly –measurement error represents this imperfection We assume that we have a true score – T A measure of that score –x

514 514 x = T + e just like a regression equation –standardise the parameters –the standardised effect of T (squared) is the reliability the amount of variance in x which comes from T but, like a regression equation –assume that e is random and has a mean of zero –more on that later

515 515 Simple Effects of Measurement Error Lowers the measured correlation –between two variables Real correlation –true scores (x* and y*) Measured correlation –measured scores (x and y)

516 516 (Path diagram) True scores x* and y* each cause their measured scores x and y, along with error e –Reliability of x = rxx –Reliability of y = ryy –True correlation of x and y = rx*y* –Measured correlation of x and y = rxy

517 517 Attenuation of correlation: rxy = rx*y* × √(rxx × ryy) Attenuation-corrected correlation: rx*y* = rxy / √(rxx × ryy)

518 518 Example
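A hedged worked example with made-up numbers (true correlation .50, reliabilities .80 and .70), checked in Stata:
display .50*sqrt(.80*.70)        // measured (attenuated) correlation: about .37
display .374/sqrt(.80*.70)       // corrected back up: about .50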

519 519 Complex Effects of Measurement Error Really horribly complex Measurement error reduces correlations –reduces the estimate of b –reducing one estimate increases others –because of effects of control –combined with effects of suppressor variables –exercise to examine this

520 520 Dealing with Measurement Error Attenuation correction –very dangerous –not recommended Avoid in the first place –use reliable measures –don't discard information don't categorise Age: 10-20, 21-30, …

521 521 Complications Assume measurement error is –additive –linear Additive –e.g. weight – people may under-report / over-report at the extremes Linear –particularly a problem when using proxy variables

522 522 e.g. proxy measures –Want to know effort on childcare, count number of children the 1st child is more effort than the 19th child –Want to know financial status, count income the 1st £1 has a much greater effect on financial status than the 1,000,000th

523 Exercise

524 524 Lesson 9: Non-Linear Analysis in Regression

525 525 Introduction Non-linear effect occurs –when the effect of one predictor –is not consistent across the range of the IV Assumption is violated –expected value of residuals = 0 –no longer the case

526 526 Some Examples

527 527 A Learning Curve (Skill plotted against Experience)

528 528 Yerkes-Dodson Law of Arousal (Performance plotted against Arousal)

529 529 Enthusiasm Levels over a Lesson on Regression (Enthusiasm, from 'Suicidal' to 'Enthusiastic', plotted against Time)

530 530 Learning –line changed direction once Yerkes-Dodson –line changed direction once Enthusiasm –line changed direction twice

531 531 Everything is Non-Linear Every relationship we look at is non-linear, for two reasons –Exam results cannot keep increasing with reading more books Linear in the range we examine –For small departures from linearity Cannot detect the difference Non-parsimonious solution

532 532 Non-Linear Transformations

533 533 Bending the Line Non-linear regression is hard –We cheat, and linearise the data Do linear regression Transformations We need to transform the data –rather than estimating a curved line which would be very difficult may not work with OLS –we can take a straight line, and bend it –or take a curved line, and straighten it back to linear (OLS) regression

534 534 We still do linear regression –Linear in the parameters –Y = b1x + b2x² + … Can do non-linear regression –Non-linear in the parameters –e.g. Y = b1x^(b2) + … Much trickier –Statistical theory either breaks down OR becomes harder

535 535 Linear transformations –multiply by a constant –add a constant –change the slope and the intercept

536 536 (Plot of y against x, showing the lines y = x, y = 2x and y = x + 3)

537 537 Linear transformations are no use –alter the slope and intercept –don't alter the standardised parameter estimate Non-linear transformation –will bend the slope –quadratic transformation y = x² –one change of direction

538 538 –Cubic transformation y = x² + x³ –two changes of direction

539 539 To estimate a non-linear regression –we don't actually estimate anything non-linear –we transform the x-variable to a non-linear version –can estimate that straight line –represents the curve –we don't bend the line, we stretch the space around the line, and make it flat
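A sketch in Stata, adding a quadratic term for education (variable names from the employee data, assumed):
gen educ2 = educ^2
reg salbegin educ educ2
* or let factor-variable notation build the squared term:
reg salbegin c.educ##c.educ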

540 540 Detecting Non-linearity

541 541 Draw a Scatterplot Draw a scatterplot of y plotted against x –see if it looks a bit non-linear –e.g. Education and beginning salary from bank data with line of best fit

542 542 A Real Example Starting salary and years of education –From employee data.sav

543 543 (Annotations on the scatterplot: in parts of the range the expected value of the error (residual) is > 0, in others it is < 0)

544 544 Use Residual Plot Scatterplot is only good for one variable –use the residual plot (that we used for heteroscedasticity) Good for many variables

545 545 We want –points to lie in a nice straight sausage

546 546 We don't want –a nasty bent sausage

547 547 Educational level and starting salary

548 548 Carrying Out Non-Linear Regression

549 549 Linear Transformation Linear transformation doesn't change –interpretation of slope –standardised slope –se, t, or p of slope –R² Can change –the effect of a (subsequent non-linear) transformation

550 550 Actually more complex –with some transformations can add a constant with no effect (e.g. quadratic) With others does have an effect –inverse, log Sometimes it is necessary to add a constant –negative numbers have no square root –0 has no log
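A tiny sketch of why the constant matters before a log transform (income is a hypothetical variable that contains zeros):
gen logincome = log(income + 1)    // add a constant first – 0 has no log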

551 551 Education and Salary Linear Regression Saw previously that the assumption of expected errors = 0 was violated Anyway … –R² = 0.401, p < … –salbegin = … + … × educ –Standardised b1 (educ) = … –Both parameters make sense