Presentation on theme: "Non-orthogonal regressors: concepts and consequences"— Presentation transcript:
1 Non-orthogonal regressors: concepts and consequences
2 Overview
- Problem of non-orthogonal regressors
- Concepts: orthogonality and uncorrelatedness
- SPM (1st level): covariance matrix, detrending
- How to deal with correlated regressors
- Example
3 Design matrix
[Figure: design matrix; columns = regressors, rows = scan number]
Each column in your design matrix represents (1) events of interest or (2) a measure that may confound your results. Column = regressor. The optimal linear combination of all these columns attempts to explain as much variance in your dependent variable (the BOLD signal) as possible.
4 The GLM: Y = β1·x1 + β2·x2 + e
[Figure: BOLD signal over time decomposed into weighted regressors x1 and x2 plus error e]
Source: SPM course 2010, Stephan
5 The betas are estimated on a voxel-by-voxel basis. A high beta means the regressor explains much of the BOLD signal’s variance (i.e. it strongly covaries with the signal).
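The voxel-wise estimation described above can be sketched as a toy GLM in NumPy. This is a minimal illustration with synthetic data and made-up true betas, not SPM code:

```python
import numpy as np

# Minimal sketch of voxel-wise GLM estimation (synthetic data, not SPM code).
rng = np.random.default_rng(0)
n_scans = 100

x1 = rng.standard_normal(n_scans)                # regressor 1 (e.g. task events)
x2 = rng.standard_normal(n_scans)                # regressor 2 (e.g. a confound)
X = np.column_stack([x1, x2, np.ones(n_scans)])  # design matrix plus constant term

# Simulated BOLD signal for one voxel: true betas 2.0 and 0.5, plus noise.
y = 2.0 * x1 + 0.5 * x2 + rng.standard_normal(n_scans)

# Ordinary least squares: beta = (X'X)^-1 X'y
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # beta[0] near 2.0, beta[1] near 0.5, up to noise
```

A large estimated beta here corresponds to a regressor that covaries strongly with the simulated signal, exactly as the slide describes.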
6 Problem of non-orthogonal regressors
[Figure: Y = total variance in the BOLD signal]
7 Orthogonal regressors
[Figure: Y = X1 + X2; total variance in the BOLD signal, partly explained by non-overlapping regressors X1 and X2]
Every regressor explains a unique part of the variance in the BOLD signal.
8 Orthogonal regressors
There is only one optimal linear combination of both regressors that explains as much variance as possible. The assigned betas will be as large as possible, and statistics using these betas will have optimal power.
9 Non-orthogonal regressors
[Figure: Y = X1 + X2, with X1 and X2 overlapping]
Regressors 1 and 2 are not orthogonal. Part of the explained variance can be accounted for by both regressors and is assigned to neither. Therefore, the betas for both regressors will be suboptimal.
10 Entirely non-orthogonal
[Figure: regressors 1 and 2 fully overlapping within the total variance in the BOLD signal]
The betas cannot be estimated: the variance cannot be assigned to one regressor or the other.
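A minimal sketch of why estimation fails in the entirely non-orthogonal case: with hypothetical data in which the second regressor is an exact multiple of the first, the design matrix is rank-deficient and (X'X)^-1 does not exist.

```python
import numpy as np

# Sketch: with perfectly collinear regressors, X'X is singular,
# so beta = (X'X)^-1 X'y has no unique solution.
rng = np.random.default_rng(1)
x1 = rng.standard_normal(50)
x2 = 2.0 * x1                      # entirely non-orthogonal: an exact multiple of x1
X = np.column_stack([x1, x2])

print(np.linalg.matrix_rank(X))    # 1, not 2: rank-deficient design
print(np.linalg.cond(X.T @ X))     # enormous condition number: X'X is singular
```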
11 “It is always simpler to have orthogonal regressors and therefore designs.“ (spm course 2010)
12 Orthogonality
Regressors can be seen as vectors in n-dimensional space, where n = number of scans. Suppose now n = 2:
r1 = (1, 2), r2 = (2, 1)
[Figure: r1 and r2 plotted as vectors in the plane]
13 Orthogonality
Two vectors are orthogonal if:
- the inner product of the raw vectors == 0
- the angle between the vectors == 90°
- the cosine of the angle == 0
Inner product: r1 • r2 = (1 * 2) + (2 * 1) = 4
θ = acos(4 / (|r1| * |r2|)) = acos(4 / 5) ≈ 37°
[Figure: r1 and r2 with the ≈37° angle between them]
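The slide's numbers can be reproduced directly; this NumPy sketch computes the inner product and the angle between r1 and r2:

```python
import numpy as np

# Inner product and angle between the two example regressors from the slide.
r1 = np.array([1.0, 2.0])
r2 = np.array([2.0, 1.0])

inner = r1 @ r2                     # (1 * 2) + (2 * 1) = 4
cos_theta = inner / (np.linalg.norm(r1) * np.linalg.norm(r2))  # 4 / 5 = 0.8
theta_deg = np.degrees(np.arccos(cos_theta))
print(inner, cos_theta, theta_deg)  # 4.0 0.8 ~36.87
```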
15 Orthogonality & uncorrelatedness
An aside on these two concepts:
- Orthogonal is defined as: X'Y = 0 (inner product of two raw vectors = 0)
- Uncorrelated is defined as: (X - mean(X))'(Y - mean(Y)) = 0 (inner product of two detrended vectors = 0)
Vectors can be orthogonal while being correlated, and vice versa!
16 Orthogonal
X = (1, -5, 3, -1), Y = (5, 1, 1, 3)
Orthogonal, because the inner product is zero: 1*5 + (-5)*1 + 3*1 + (-1)*3 = 0
Please read Rodgers et al. (1984), Linearly independent, orthogonal and uncorrelated variables, The American Statistician, 38. It will be in the FAM folder as well.
[Speaker notes: give the inner product of the orthogonal example and show whether it is uncorrelated; show a figure where the vectors are orthogonal but correlated after detrending; show the vectors being switched.]
17 Orthogonal, but correlated!
Detrend: mean(X) = -0.5, mean(Y) = 2.5
X_det = (1.5, -4.5, 3.5, -0.5), mean(X_det) = 0
Y_det = (2.5, -1.5, -1.5, 0.5), mean(Y_det) = 0
Inner product of the detrended vectors: 3.75 + 6.75 - 5.25 - 0.25 = 5 ≠ 0
Orthogonal, but correlated!
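The same example in NumPy (a sketch using the vector values from the slide): the raw inner product is zero, but the inner product after mean-centering is not, so the vectors are orthogonal yet correlated.

```python
import numpy as np

# The slide's example: orthogonal raw vectors that are nonetheless correlated.
X = np.array([1.0, -5.0, 3.0, -1.0])
Y = np.array([5.0, 1.0, 1.0, 3.0])

print(X @ Y)                        # 0.0 -> orthogonal
X_det = X - X.mean()                # mean(X) = -0.5
Y_det = Y - Y.mean()                # mean(Y) = 2.5
print(X_det @ Y_det)                # 5.0 -> correlated
print(np.corrcoef(X, Y)[0, 1])     # nonzero Pearson correlation
```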
19 Orthogonality & uncorrelatedness
Q: So should my regressors be uncorrelated or orthogonal?
A: When building your SPM.mat (i.e. running your jobfile), all regressors are detrended (except the grand mean scaling regressor). This is why "orthogonal" and "uncorrelated" are both used when talking about regressors; people say "orthogonal" when they mean uncorrelated, and this talk will do so from now on as well.
Update: it is unclear whether all regressors are detrended when building an SPM.mat. Recent SPM mailing list activity suggests detrending might not take place in versions newer than SPM99. Quote from Guillaume on the mailing list: "effectively there has been a change between SPM99 and SPM2 such that regressors were mean-centered in SPM99 but they are not any more (this is regressed out by the constant term anyway)." (Link)
20 Your regressors correlate
Despite scrupulous design, your regressors likely still correlate to some extent. This causes beta estimates to be lower than they could be. You can inspect the correlations via Review → SPM.mat → Design → Design orthogonality.
22 For detrended data, the cosine of the angle between two regressors (black = 1, white = 0 in the orthogonality image) is the same as the correlation r!
- Orthogonal vectors: cos(90°) = 0, so r = 0 and r² = 0.
- Correlated vectors: cos(81°) ≈ 0.16, so r ≈ 0.16 and r² ≈ 0.0256.
r² indicates how much variance is common between the two vectors (2.56% in this example). Note: -1 ≤ r ≤ 1 and 0 ≤ r² ≤ 1.
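The identity between the cosine of the angle and Pearson's r for mean-centered data can be checked on arbitrary regressors; this sketch uses synthetic ones:

```python
import numpy as np

# Sketch: for mean-centered vectors, the cosine of the angle equals Pearson's r.
rng = np.random.default_rng(2)
a = rng.standard_normal(200)
b = 0.3 * a + rng.standard_normal(200)   # b partly tracks a, so they correlate

a_c = a - a.mean()
b_c = b - b.mean()
cos_angle = (a_c @ b_c) / (np.linalg.norm(a_c) * np.linalg.norm(b_c))
r = np.corrcoef(a, b)[0, 1]
print(cos_angle, r)  # identical up to floating-point error
```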
23-26 Correlated regressors: variance moves from a single regressor to shared
A t-test uses the beta, which is determined by the amount of variance explained by that single regressor. Large shared variance therefore means low statistical power. This is not necessarily a problem if you do not intend to test these two regressors (e.g. movement regressor 1 and movement regressor 2)!
27 How to deal with correlated regressors?
Strong correlations between regressors are not necessarily a problem. What is relevant is the correlation of a contrast of interest with the rest of the design matrix.
Example: lights on vs lights off. If movement regressors correlate with these conditions (the contrast of interest is not orthogonal to the rest of the design matrix), there is a problem. If nuisance regressors only correlate with each other: no problem!
Note: the grand mean scaling regressor is not centered around 0 (i.e. not detrended), so correlations involving it are not informative.
28 Q: But what about the fact that the SPM book says that the correlation between a contrast and the rest of the design matrix is what matters (and not all regressors of interest versus each other and versus the movement regressors)?
A: It is still a problem, assuming you will test all of your regressors of interest: then none of those contrasts will be orthogonal to the rest of the design matrix.
29 How to deal with correlations between contrast and rest of design matrix?
Orthogonalize regressor A with respect to regressor B: all shared variance will now be assigned to B.
31 Orthogonality
[Figure: Venn diagram of regressors 1 and 2 (r1, r2) against the total variance in the BOLD signal, before and after orthogonalization]
32 How to deal with correlations between contrast and rest of design matrix?
Orthogonalize regressor A with respect to regressor B: all shared variance will now be assigned to B. This is only permissible given an a priori reason to do so, which is hardly ever the case.
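Orthogonalizing A with respect to B is a single projection step. A NumPy sketch with synthetic regressors (not SPM's spm_orth, though the operation is the same idea):

```python
import numpy as np

# Sketch: orthogonalizing regressor A with respect to regressor B removes
# from A the component lying along B; the shared variance goes to B.
rng = np.random.default_rng(3)
B = rng.standard_normal(100)
A = 0.5 * B + rng.standard_normal(100)   # A correlates with B

A_orth = A - (A @ B) / (B @ B) * B       # project out B (one Gram-Schmidt step)
print(A_orth @ B)                        # ~0: A_orth is orthogonal to B
```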
33 How to deal with correlations between contrast and rest of design matrix?
Do an F-test to test the overall significance of your model, for example to see whether adding a regressor significantly improves it. The shared variance is then taken along in determining significance.
When a number of regressors represent the same manipulation (e.g. switch activity convolved with different HRFs), you can serially orthogonalize the regressors before estimating the betas.
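Serial orthogonalization can be sketched as projecting each regressor onto the ones before it, in column order, and keeping only the residual. This is an illustrative implementation with synthetic data; the `serial_orthogonalize` helper is hypothetical, not an SPM function:

```python
import numpy as np

# Sketch: serial orthogonalization. Each column has the components of all
# earlier columns projected out, so order determines who keeps shared variance.
def serial_orthogonalize(X):
    X = X.astype(float).copy()
    for j in range(1, X.shape[1]):
        for k in range(j):
            X[:, j] -= (X[:, j] @ X[:, k]) / (X[:, k] @ X[:, k]) * X[:, k]
    return X

rng = np.random.default_rng(4)
base = rng.standard_normal(100)
# Three correlated regressors (e.g. one event convolved with different HRFs).
X = np.column_stack([base + 0.1 * rng.standard_normal(100) for _ in range(3)])
X_orth = serial_orthogonalize(X)
print(np.round(X_orth.T @ X_orth, 6))    # off-diagonal entries ~0
```

Note that the result depends on the column order: the first regressor is untouched and absorbs all variance it shares with the later ones.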
34 Example of how not to do it
Two types of trials: gain and loss.
Voon et al. (2010) Mechanisms underlying dopamine-mediated reward bias in compulsive behaviors. Neuron.
35 Example of how not to do it
Four regressors:
- Gain predicted outcome
- Positive prediction error (gain trials)
- Loss predicted outcome
- Negative prediction error (loss trials)
These regressors are highly correlated; they simply are, especially when no jitter is introduced.
36 Example of how not to do it
They performed 6 separate analyses (GLMs). The shared variance is attributed to a single regressor in each GLM. Amazing! Similar patterns of activation!
37 Take home messages
- If regressors correlate, the shared explained variance in your BOLD signal will be assigned to neither regressor, which reduces power on t-tests.
- If you orthogonalize regressor A with respect to regressor B, the values of A will change but A's uniquely explained variance stays the same; B, the unchanged regressor, will come to explain all variance shared by A and B. However, don't do this unless you have a valid reason.
- Orthogonality and uncorrelatedness are only the same thing if your data are centered around 0 (detrended, spm_detrend).
- SPM does (NOT?) detrend your regressors the moment you go from job.mat to SPM.mat.
38 Interesting reads
- Rik Henson's page on design efficiency: combines the SPM book and his own attempt at explaining design efficiency and the issue of correlated regressors.
- Rodgers et al. (1984) Linearly independent, orthogonal and uncorrelated variables. The American Statistician, 38: a 15-minute read that describes three basic concepts in statistics/algebra.
40 Non-orthogonal, but uncorrelated
Raw vectors:
x   y
3   6
6  -3
9   6
Inner product: 54, so non-orthogonal.
Same vectors, but detrended:
 x   y
-3   3
 0  -6
 3   3
Inner product: 0, so uncorrelated.
[Speaker note: also include an example here where the raw vectors are orthogonal but, after detrending (which is what SPM does), the vectors are correlated.]
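This example can be checked the same way as the earlier one; a NumPy sketch of the slide's numbers:

```python
import numpy as np

# The slide's vectors: non-orthogonal raw, but uncorrelated once mean-centered.
x = np.array([3.0, 6.0, 9.0])
y = np.array([6.0, -3.0, 6.0])

print(x @ y)                            # 54.0 -> non-orthogonal
print((x - x.mean()) @ (y - y.mean()))  # 0.0 -> uncorrelated
```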