A word on correlation/estimability


If any column of X is a linear combination of the others (X is rank deficient), some parameters cannot be estimated uniquely (they are inestimable)…

…which means some contrasts cannot be tested (eg, only contrasts whose weights sum to zero)

This has implications for whether the "baseline" (constant term) is explicitly or implicitly modelled

[Figure: three example design matrices. Left: columns A, B and A+B, so rank(X) = 2; the main-effect contrast cm = [1 0 0] is inestimable, while the difference contrast cd = [1 -1 0] can still be tested. Middle ("implicit" baseline): columns A and B, with cm = [1 0] and cd = [1 -1]; here b1 = 1.6 and b2 = 0.7, so cd*b = [1 -1]*b = 0.9. Right ("explicit" baseline): columns A and A+B; here b1 = 0.9 and b2 = 0.7, and cd = [1 0] gives cd*b = [1 0]*b = 0.9, the same differential effect either way.]
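A minimal NumPy sketch of this (not from the slides; the regressors, effect sizes and noise level are invented, chosen so the fitted values match the b1 = 1.6, b2 = 0.7 example above). With the rank-deficient design [A, B, A+B] the individual betas are not unique, but the zero-summing difference contrast is still estimable, and the implicit and explicit baselines recover the same differential effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
A = np.r_[np.ones(50), np.zeros(50)]      # condition A: first half of scans
B = 1.0 - A                               # condition B: second half
y = 1.6 * A + 0.7 * B + 0.05 * rng.standard_normal(n)

# Rank-deficient design [A, B, A+B]: the third column equals the sum of the first two
X = np.column_stack([A, B, A + B])
print(np.linalg.matrix_rank(X))           # -> 2, not 3

# A contrast c is estimable iff it lies in the row space of X
pX = np.linalg.pinv(X)
cm = np.array([1., 0., 0.])
cd = np.array([1., -1., 0.])
print(np.allclose(cm @ pX @ X, cm))       # False: main effect is inestimable
print(np.allclose(cd @ pX @ X, cd))       # True: the difference is estimable
print(cd @ pX @ y)                        # ~0.9 = 1.6 - 0.7

# Implicit vs explicit baseline: same differential effect either way
b_imp = np.linalg.lstsq(np.column_stack([A, B]), y, rcond=None)[0]
b_exp = np.linalg.lstsq(np.column_stack([A, A + B]), y, rcond=None)[0]
print(b_imp, np.array([1., -1.]) @ b_imp)  # ~[1.6, 0.7], cd*b ~ 0.9
print(b_exp, np.array([1., 0.]) @ b_exp)   # ~[0.9, 0.7], cd*b ~ 0.9
```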

(Rank deficiency might be thought of as perfect correlation…)

The "implicit" and "explicit" designs above are related by an invertible transformation T:

    T = [ 1  1
          0  1 ]

X(1) * T = X(2)
c(1) * T = c(2):  [ 1 -1 ] * [ 1  1 ; 0  1 ] = [ 1  0 ]
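The same relation checked numerically, continuing the sketch above (same hypothetical A and B regressors):

```python
import numpy as np

n = 100
A = np.r_[np.ones(50), np.zeros(50)]
B = 1.0 - A

T = np.array([[1., 1.],
              [0., 1.]])

X1 = np.column_stack([A, B])          # "implicit" parameterisation
X2 = np.column_stack([A, A + B])      # "explicit" parameterisation
print(np.allclose(X1 @ T, X2))        # X(1) * T = X(2) -> True

c1 = np.array([1., -1.])
print(c1 @ T)                         # c(1) * T = c(2) -> [1. 0.]
```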

A word on correlation/estimability

When there is high (but not perfect) correlation between regressors, parameters can be estimated…

…but the estimates will be inefficiently estimated (ie highly variable)…

…meaning some contrasts will not lead to very powerful tests

[Figure: the design with columns A, B and A+B, and the same design after convolution with the HRF; in both cases cm = [1 0 0] and cd = [1 -1 0].]

SPM shows the pairwise correlation between regressors, but this will NOT tell you that, eg, X1+X2 is highly correlated with X3…

…so some contrasts can still be inefficient, even though the pairwise correlations are low
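A minimal NumPy sketch of this point (toy regressors, not fMRI data; the design and numbers are invented for illustration). The variance of a contrast estimate is proportional to c (X'X)^-1 c', so a contrast that loads on a nearly collinear combination of regressors is very inefficient even though every pairwise correlation is only moderate:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
Q, _ = np.linalg.qr(rng.standard_normal((n, 4)))   # four orthonormal columns
X1, X2, X3, w = Q.T
X4 = (X1 + X2 + X3) / np.sqrt(3) + 0.02 * w        # nearly collinear with X1+X2+X3
X = np.column_stack([X1, X2, X3, X4])

print(np.round(np.corrcoef(X.T), 2))               # pairwise correlations only ~0.58

XtX_inv = np.linalg.inv(X.T @ X)

def contrast_var(c):
    """Proportional to the variance of the contrast estimate c @ beta_hat."""
    c = np.asarray(c, dtype=float)
    return float(c @ XtX_inv @ c)

print(contrast_var([1, -1, 0, 0]))                 # ~2: an efficient difference
print(contrast_var([1, 1, 1, -np.sqrt(3)]))        # huge: loads on the collinearity
```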

A word on orthogonalisation

To remove correlation between two regressors, you can explicitly orthogonalise one (X1) with respect to the other (X2):

    X1^ = X1 – (X2 X2+) X1    (Gram-Schmidt; X2+ is the pseudoinverse of X2)

Paradoxically, this will NOT change the parameter estimate for X1, but it will change the estimate for X2. In other words, the parameter estimate for the orthogonalised regressor is unchanged!

This reflects the fact that parameter estimates automatically reflect the orthogonal component of each regressor…

…so there is no need to orthogonalise, UNLESS you have an a priori reason for assigning the common variance to the other regressor

[Figure: geometric picture with the data Y and the correlated regressors X1 and X2, showing the estimates b1 and b2, the orthogonalised regressor X1^, and the changed estimate b2^.]
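A minimal NumPy sketch (simulated regressors with hypothetical effect sizes 1.0 and 0.5) demonstrating the point: after orthogonalising X1 with respect to X2, the estimate for X1 is unchanged, while X2 absorbs the shared variance:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
X2 = rng.standard_normal(n)
X1 = 0.6 * X2 + rng.standard_normal(n)        # X1 correlated with X2
y = 1.0 * X1 + 0.5 * X2 + 0.3 * rng.standard_normal(n)

def betas(*cols):
    """OLS estimates for a design built from the given columns."""
    return np.linalg.lstsq(np.column_stack(cols), y, rcond=None)[0]

print(betas(X1, X2))                          # ~[1.0, 0.5]

# Orthogonalise X1 with respect to X2: X1^ = X1 - (X2 X2+) X1
X2c = X2[:, None]
X1_orth = X1 - (X2c @ np.linalg.pinv(X2c)) @ X1

print(betas(X1_orth, X2))                     # b1 unchanged (~1.0); b2 now ~1.1
print(np.allclose(X1_orth @ X2, 0))           # True: the regressors are orthogonal
```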