General Linear Model & Classical Inference

General Linear Model & Classical Inference London, SPM-M/EEG course May 2012 C. Phillips, Cyclotron Research Centre, ULg, Belgium http://www.cyclotron.ulg.ac.be

Overview: Introduction (ERP example); General Linear Model (definition & design matrix, parameter estimation & interpretation, contrast & inference, correlated regressors); Conclusion

Inference & correction for multiple comparisons Overview of SPM Statistical Parametric Map (SPM) Raw EEG/MEG data Design matrix Pre-processing: Converting Filtering Resampling Re-referencing Epoching Artefact rejection Time-frequency transformation … General Linear Model Inference & correction for multiple comparisons Parameter estimates Image convertion Contrast: c’ = [-1 1]

ERP example: random presentation of ‘faces’ and ‘scrambled faces’, 70 trials of each type, 128 EEG channels. Question: is there a difference between the ERP of ‘faces’ and ‘scrambled faces’?

ERP example: channel B9, focusing on the N170 component. The t-test compares the size of an effect to its error standard deviation.

Overview: Introduction (ERP example); General Linear Model (definition & design matrix, parameter estimation & interpretation, contrast & inference, correlated regressors); Conclusion

Data modeling: Y = b1·X1 + b2·X2 + ε, where Y is the measured data, X1 and X2 are the regressors for the ‘faces’ and ‘scrambled’ conditions, b1 and b2 are the unknown parameters, and ε is the error.

Design matrix: in matrix form, Y = X·b + ε, with Y the data vector, X the design matrix (one column per regressor), b = [b1 b2]' the parameter vector and ε the error vector.

General Linear Model: Y = X·b + ε. The GLM is defined by the design matrix X (of size N × p, with N the number of trials and p the number of regressors) and by the error distribution, e.g. i.i.d. Gaussian, ε ~ N(0, σ²I).

General Linear Model: the design matrix embodies all available knowledge about experimentally controlled factors and potential confounds. It is applied to all channels and time points (mass-univariate parametric analysis). Special cases of the GLM include the one-sample t-test, two-sample t-test, paired t-test, Analysis of Variance (ANOVA), factorial designs, correlation, linear regression and multiple regression.
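As an illustration of how such a design matrix could be set up for the faces/scrambled experiment above, here is a minimal NumPy sketch (the indicator coding and variable names are assumptions for this example, not SPM code):

```python
import numpy as np

# Hypothetical sketch: indicator design matrix for 70 'faces' and
# 70 'scrambled' trials (one column per condition).
n_faces, n_scrambled = 70, 70
N = n_faces + n_scrambled            # number of trials
X = np.zeros((N, 2))
X[:n_faces, 0] = 1                   # column 1: 'faces' regressor
X[n_faces:, 1] = 1                   # column 2: 'scrambled' regressor

# Mass-univariate: the same model is fitted independently at every
# channel and time point, so the data Y can hold one column per
# channel/time combination.
print(X.shape)                       # (140, 2)
```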

Parameter estimation: estimate the parameters b such that the sum of squared residuals is minimal, where the residuals are ê = Y − X·b̂. If i.i.d. error is assumed, this yields the Ordinary Least Squares parameter estimate b̂ = (X'X)⁻¹X'Y (or, with the pseudoinverse, b̂ = (X'X)⁺X'Y).
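A minimal sketch of the OLS step in NumPy, assuming the two-condition design above and simulated data with invented parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-condition design and simulated data (invented b values).
X = np.zeros((140, 2)); X[:70, 0] = 1; X[70:, 1] = 1
b_true = np.array([3.0, 1.0])
Y = X @ b_true + rng.normal(0, 1, size=140)

# Ordinary Least Squares: b_hat = pinv(X) @ Y minimises ||Y - X b||^2.
b_hat = np.linalg.pinv(X) @ Y
residuals = Y - X @ b_hat
print(b_hat, residuals @ residuals)
```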

Hypothesis testing: the null hypothesis H0 is typically what we want to disprove (i.e. no effect); the alternative hypothesis HA is the outcome of interest.

b1 < b2 ? (bi : estimation of bi ) contrast of estimated parameters Contrast & t-test Contrast : specifies linear combination of parameter vector: c´ SPM-t over time & space c’ = -1 +1 ERP: faces < scrambled ? = b1 < b2 ? (bi : estimation of bi ) = ^ -1xb1 + 1xb2 > 0 ? = ^ test H0 : c´ ´ b > 0 ? ^ T = contrast of estimated parameters variance estimate s2c’(X’X)+c c’ b ^

Hypothesis testing: the null hypothesis H0 is typically what we want to disprove (i.e. no effect); the alternative hypothesis HA is the outcome of interest. The test statistic T summarises the evidence about H0: it is (typically) small in magnitude when H0 is true and large when it is false. We therefore need to know the distribution of T under the null hypothesis (the null distribution of T).

Hypothesis testing (continued): the significance level α is the acceptable false positive rate; the corresponding threshold u_α controls this false positive rate, P(T > u_α | H0) = α. Given an observed test statistic t (a realisation of T), we reject H0 in favour of HA if t > u_α. The p-value summarises the evidence against H0: it is the chance of observing a value more extreme than t under the null distribution of T, p = P(T > t | H0).
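For a Student null distribution, the threshold u_α and the p-value can be obtained as in this SciPy sketch (α, the degrees of freedom and the observed t are arbitrary illustrative values):

```python
from scipy import stats

alpha, dof = 0.05, 138                 # illustrative values
u_alpha = stats.t.isf(alpha, dof)      # threshold: P(T > u_alpha | H0) = alpha

t_obs = 2.7                            # observed test statistic (made up)
p_value = stats.t.sf(t_obs, dof)       # P(T > t_obs | H0)
print(u_alpha, p_value, t_obs > u_alpha)
```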

Contrast & T-test, a few remarks Contrasts = simple linear combinations of the betas T-test = signal-to-noise measure (ratio of estimate to standard deviation of estimate). T-statistic, NO dependency on scaling of the regressors or contrast Unilateral test: H0: vs. HA:

Extra-sum-of-squares & F-test: model comparison of the full model X = [X1 X0] against the reduced model X0. Null hypothesis H0: the true model is X0 (the reduced model). The test statistic is the ratio of explained to unexplained variability (error), F = ((RSS0 − RSS)/ν1) / (RSS/ν2), where RSS0 and RSS are the residual sums of squares of the reduced and full models, ν1 = rank(X) − rank(X0), and ν2 = N − rank(X).
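A sketch of the extra-sum-of-squares comparison in NumPy/SciPy (the full and reduced designs here are invented for illustration; the F formula follows the slide):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

N = 140
X0 = np.ones((N, 1))                       # reduced model: intercept only
X1 = np.zeros((N, 1)); X1[:70, 0] = 1      # regressor of interest
X = np.hstack([X1, X0])                    # full model
Y = X @ np.array([0.8, 2.0]) + rng.normal(0, 1, size=N)

def rss(X, Y):
    """Residual sum of squares of the OLS fit."""
    resid = Y - X @ (np.linalg.pinv(X) @ Y)
    return resid @ resid

nu1 = np.linalg.matrix_rank(X) - np.linalg.matrix_rank(X0)
nu2 = N - np.linalg.matrix_rank(X)
F = ((rss(X0, Y) - rss(X, Y)) / nu1) / (rss(X, Y) / nu2)
print(F, stats.f.sf(F, nu1, nu2))          # F-statistic and its p-value
```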

F-test & multidimensional contrasts Tests multiple linear hypotheses: H0: True model is X0 0 0 1 0 0 0 0 1 cT = H0: b3 = b4 = 0 test H0 : cTb = 0 ? X0 X1 (b3-4) X0 Full or reduced model?

Correlated and orthogonal regressors: when regressors are correlated, the explained variance is shared between them. If x2 is orthogonalized with respect to x1 (giving x2*), only the parameter estimate for x1 changes, not that for x2!
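A small numerical check of this orthogonalization claim (simulated, correlated regressors; the names and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
x1 = rng.normal(size=n)
x2 = 0.7 * x1 + rng.normal(scale=0.5, size=n)      # correlated with x1
Y = 2.0 * x1 + 1.0 * x2 + rng.normal(size=n)

def fit(*cols):
    X = np.column_stack(cols)
    return np.linalg.pinv(X) @ Y

# Original (correlated) design.
b1, b2 = fit(x1, x2)

# Orthogonalize x2 with respect to x1.
x2_orth = x2 - (x1 @ x2) / (x1 @ x1) * x1
b1_new, b2_new = fit(x1, x2_orth)

print(np.isclose(b2, b2_new))   # True: estimate for x2 is unchanged
print(np.isclose(b1, b1_new))   # False: estimate for x1 absorbs shared variance
```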

Inference & correlated regressors implicitly test for an additional effect only be careful if there is correlation orthogonalisation = decorrelation (not generally needed)  parameters and test on the non modified regressor change always simpler to have orthogonal regressors and therefore designs. use F-tests in case of correlation, to see the overall significance. There is generally no way to decide to which regressor the « common » part should be attributed to. original regressors may not matter: it’s the contrast you are testing which should be as decorrelated as possible from the rest of the design matrix

Overview: Introduction (ERP example); General Linear Model (definition & design matrix, parameter estimation & interpretation, contrast & inference, correlated regressors); Conclusion

Modelling: why, and how? Why model? To make inferences about effects of interest: decompose the data into experimental effects and error, then form a statistic from the estimates of the effects (of interest) and of the error, e.g. with a contrast such as [1 -1]. How to model? Use any available knowledge.

Thank you for your attention! Any questions? Thanks to Klaas, Guillaume, Rik, Will, Stefan, Andrew & Karl for the borrowed slides!