A word on correlation/estimability

Presentation on theme: "A word on correlation/estimability" — Presentation transcript:

1 A word on correlation/estimability
If any column of X is a linear combination of the others (X is rank deficient), some parameters cannot be estimated uniquely (they are inestimable)… which means some contrasts cannot be tested (eg, only contrasts that sum to zero remain testable). This has implications for whether the "baseline" (constant term) is modelled explicitly or implicitly.

[Figure: three parameterisations of a two-condition design. Left: columns A, B, A+B, so rank(X) = 2, with contrasts cm = [ ] and cd = [ ]. Middle, "implicit" baseline: columns A and B, cm = [1 0], cd = [1 -1]; b1 = 1.6, b2 = 0.7, so cd*b = [1 -1]*b = 0.9. Right, "explicit" baseline: columns A and A+B; b1 = 0.9, b2 = 0.7, cd = [1 0], so cd*b = [1 0]*b = 0.9.]
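To make estimability concrete, here is a minimal numpy sketch using a hypothetical two-condition indicator design (not the exact design in the slide figure): with columns A, B, and A+B, X has rank 2, and a contrast c is estimable only if it lies in the row space of X, which the sum-to-zero difference contrast does and a contrast on A alone does not.

```python
import numpy as np

# Hypothetical two-condition indicator design: columns A, B, and A+B.
A = np.array([1, 1, 0, 0, 1, 0], dtype=float)
B = 1 - A
X = np.column_stack([A, B, A + B])   # third column = A + B

print(np.linalg.matrix_rank(X))      # 2 -> X is rank deficient

# A contrast c is estimable only if it lies in the row space of X,
# ie if projecting c onto the row space (via pinv(X) @ X) reproduces c.
def estimable(c, X):
    return np.allclose(c @ np.linalg.pinv(X) @ X, c)

print(estimable(np.array([1., -1., 0.]), X))  # True: difference, sums to zero
print(estimable(np.array([1., 0., 0.]), X))   # False: A alone is inestimable
```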

2 A word on correlation/estimability
If any column of X is a linear combination of the others (X is rank deficient), some parameters cannot be estimated uniquely (inestimable)… which means some contrasts cannot be tested (eg, only contrasts that sum to zero remain testable). This has implications for whether the "baseline" (constant term) is modelled explicitly or implicitly. (Rank deficiency might be thought of as perfect correlation…)

[Figure: the same three parameterisations: A, B, A+B with rank(X) = 2 and contrasts cm = [ ], cd = [ ]; "implicit" baseline (A, B); "explicit" baseline (A, A+B).]

The "implicit" and "explicit" parameterisations span the same space and are related by an invertible transformation T, and contrasts map the same way:

T = [1 1; 0 1],   X(1) * T = X(2),   c(1) * T = c(2):   [1 -1] * [1 1; 0 1] = [1 0]
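The reparameterisation can be checked directly; a minimal numpy sketch with the same hypothetical indicator design as above:

```python
import numpy as np

A = np.array([1, 1, 0, 0, 1, 0], dtype=float)
B = 1 - A
X1 = np.column_stack([A, B])      # "implicit" baseline: A and B
X2 = np.column_stack([A, A + B])  # "explicit" baseline: A and constant

T = np.array([[1., 1.],
              [0., 1.]])

print(np.allclose(X1 @ T, X2))    # True: X(1) * T = X(2)
print(np.array([1., -1.]) @ T)    # [1. 0.]: c(1) * T = c(2)
```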

3 A word on correlation/estimability
When there is high (but not perfect) correlation between regressors, parameters can be estimated… but the estimates will be inefficient (ie highly variable)… meaning some contrasts will not lead to very powerful tests.

[Figure: the design with columns A, B, A+B and contrasts cm = [ ], cd = [ ]; the same design after the regressors are convolved with the HRF, which makes them highly correlated.]

SPM shows the pairwise correlation between regressors, but this will NOT tell you that, eg, X1+X2 is highly correlated with X3… so some contrasts can still be inefficient, even though the pairwise correlations are low.
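A rough numpy illustration of this point, with hypothetical regressors; 1/(c (X'X)^-1 c') is used as a simple relative-efficiency measure for a contrast. In this three-regressor toy the pairwise correlations come out only moderate (~0.67) while the combination x1+x2 is correlated ~0.95 with x3; with more regressors the pairwise values can be far lower while the same problem persists.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
# x3 tracks x1 + x2 closely, but no single pair is near-perfectly correlated.
x3 = 0.67 * (x1 + x2) + 0.31 * rng.standard_normal(n)
X = np.column_stack([x1, x2, x3])

print(np.corrcoef(X, rowvar=False).round(2))    # pairwise r's ~0.67: not alarming
print(np.corrcoef(x1 + x2, x3)[0, 1].round(2))  # ~0.95: the combination is the problem

# Var(c @ b_hat) scales with c (X'X)^-1 c', so its reciprocal is a
# simple relative-efficiency measure for the contrast c.
def efficiency(c, X):
    return 1.0 / (c @ np.linalg.inv(X.T @ X) @ c)

c = np.array([0., 0., 1.])
print(efficiency(c, X))   # low: the contrast on x3 is inefficient
X_ind = np.column_stack([x1, x2, rng.standard_normal(n)])
print(efficiency(c, X_ind))  # much higher when x3 is independent
```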

4 A word on orthogonalisation
To remove correlation between two regressors, you can explicitly orthogonalise one (X1) with respect to the other (X2):

X1^ = X1 - (X2 X2+) X1   (Gram-Schmidt, where X2+ is the pseudoinverse of X2)

Paradoxically, this will NOT change the parameter estimate for X1, but it will change the estimate for X2. In other words, the parameter estimate for the orthogonalised regressor is unchanged! This reflects the fact that parameter estimates automatically reflect the orthogonal component of each regressor… so there is no need to orthogonalise, UNLESS you have an a priori reason for assigning the common variance to the other regressor.

[Figure: geometric view: Y projected onto the plane spanned by X1 and X2; orthogonalising X1 to give X1^ leaves b1 unchanged but changes b2 to b2^.]
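A minimal numpy sketch of this result, with hypothetical correlated regressors: after orthogonalising X1 with respect to X2, the estimate for the orthogonalised regressor is identical, while the estimate for X2 absorbs the variance the two regressors share.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x2 = rng.standard_normal(n)
x1 = 0.8 * x2 + 0.6 * rng.standard_normal(n)     # x1 correlated with x2
y = 1.0 * x1 + 0.5 * x2 + 0.1 * rng.standard_normal(n)

X = np.column_stack([x1, x2])
print(np.linalg.pinv(X) @ y)        # [b1, b2] in the original design

# Orthogonalise x1 with respect to x2:  x1^ = x1 - (x2 x2+) x1
X2 = x2[:, None]
x1_orth = x1 - X2 @ (np.linalg.pinv(X2) @ x1)
X_orth = np.column_stack([x1_orth, x2])
print(np.linalg.pinv(X_orth) @ y)   # b1 identical; b2 has absorbed the shared variance
```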

