
1 General Linear Model & Classical Inference. London SPM-M/EEG course, May 2016. Sven Bestmann, Sobell Department, Institute of Neurology, UCL (http://www.bestmannlab.com). Slides by C. Phillips.

2 Overview: Introduction (ERP example); General Linear Model (definition & design matrix, parameter estimation & interpretation, contrast & inference, correlated regressors); Conclusion.

3 Overview: Introduction (ERP example); General Linear Model (definition & design matrix, parameter estimation & interpretation, contrast & inference, correlated regressors); Conclusion.

4 Overview of SPM: raw EEG/MEG data → pre-processing (converting, filtering, resampling, re-referencing, epoching, artefact rejection, time-frequency transformation, …) → image conversion → General Linear Model (design matrix → parameter estimates) → contrast, e.g. c′ = [-1 1] → inference & correction for multiple comparisons → Statistical Parametric Map (SPM).

5 Statistical Parametric Maps can be formed over different data spaces: 1D (time), 2D (time-frequency), 2D+t (scalp-time), and 3D (M/EEG source reconstruction, fMRI, VBM).

6 ERP example. Random presentation of 'faces' and 'scrambled faces', 70 trials of each type, 128 EEG channels. Question: is there a difference between the ERP of 'faces' and that of 'scrambled faces'?

7 ERP example: channel B9, focus on the N170. Two-sample t-test: compare the size of the effect to its error standard deviation.
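As a hedged sketch, this comparison could be run as a two-sample t-test on per-trial N170 amplitudes; the arrays `faces` and `scrambled` below are simulated stand-ins, not the course data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
faces = rng.normal(-4.0, 2.0, size=70)      # 70 'faces' trials (illustrative values)
scrambled = rng.normal(-2.5, 2.0, size=70)  # 70 'scrambled faces' trials

# Compare the size of the effect (difference in means) to its error standard deviation.
t, p = stats.ttest_ind(faces, scrambled)
print(f"t = {t:.2f}, p = {p:.4f}")
```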

8 Overview: Introduction (ERP example); General Linear Model (definition & design matrix, parameter estimation & interpretation, contrast & inference, correlated regressors); Conclusion.

9 Data modelling: Data = β₁ × Faces + β₂ × Scrambled + Error, i.e. Y = β₁X₁ + β₂X₂ + ε.

10 Design matrix: Y = Xβ + ε, with data vector Y, design matrix X, parameter vector β, and error vector ε.

11 General Linear Model: Y = Xβ + ε, where Y is the N×1 data vector (N = number of trials), X the N×p design matrix (p = number of regressors), β the p×1 parameter vector, and ε the error vector. The GLM is defined by the design matrix X and the error distribution, e.g. i.i.d. Gaussian, ε ~ N(0, σ²I).
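A minimal numpy sketch of this GLM for the ERP example; the condition amplitudes and noise level are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
faces = np.repeat([1.0, 0.0], 70)          # indicator regressor: 1 for 'faces' trials
scrambled = 1.0 - faces                    # indicator regressor for 'scrambled' trials
X = np.column_stack([faces, scrambled])    # N x p design matrix (N = 140, p = 2)

beta_true = np.array([-4.0, -2.5])         # hypothetical condition amplitudes (made up)
eps = rng.normal(0.0, 2.0, size=140)       # i.i.d. Gaussian error, eps ~ N(0, sigma^2 I)
Y = X @ beta_true + eps                    # the GLM: Y = X beta + eps
```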

12 General Linear Model. The design matrix embodies all available knowledge about experimentally controlled factors and potential confounds; the model is applied to all channels & time points. This mass-univariate parametric analysis covers the one-sample t-test, two-sample t-test, paired t-test, Analysis of Variance (ANOVA), factorial designs, correlation, linear regression, and multiple regression.

13 Parameter estimation: Y = Xβ + ε. Estimate the parameters β̂ such that the sum of squared residuals is minimal, with residuals ε̂ = Y − Xβ̂. If i.i.d. error is assumed, this gives the Ordinary Least Squares parameter estimate β̂ = (X′X)⁻¹X′Y.
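A numpy sketch of the OLS estimate on the simulated data from the previous block (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
faces = np.repeat([1.0, 0.0], 70)
X = np.column_stack([faces, 1.0 - faces])                 # design matrix as before
Y = X @ np.array([-4.0, -2.5]) + rng.normal(0, 2.0, 140)

beta_hat = np.linalg.pinv(X) @ Y           # OLS: (X'X)^(-1) X'Y for full-rank X
resid = Y - X @ beta_hat                   # residuals eps_hat = Y - X beta_hat
df = len(Y) - np.linalg.matrix_rank(X)     # error degrees of freedom
sigma2_hat = resid @ resid / df            # unbiased estimate of the error variance
print(beta_hat)                            # close to the true [-4.0, -2.5]
```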

14 Geometric perspective on the OLS estimates. The design space is defined by X (spanned by its columns x₁, x₂, …). The projection matrix P = X(X′X)⁺X′ projects the data y onto the design space, giving the fitted values ŷ = Py, while the residual-forming matrix R = I − P yields the residuals e = Ry, orthogonal to the design space.
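These two matrices can be checked numerically; a sketch on the same simulated design (P and R as defined on the slide):

```python
import numpy as np

rng = np.random.default_rng(2)
X = np.column_stack([np.repeat([1.0, 0.0], 70), np.repeat([0.0, 1.0], 70)])
y = X @ np.array([-4.0, -2.5]) + rng.normal(0, 2.0, 140)

P = X @ np.linalg.pinv(X)                  # projection matrix P = X (X'X)^+ X'
R = np.eye(len(y)) - P                     # residual-forming matrix R = I - P
y_hat, e = P @ y, R @ y                    # fitted values and residuals
assert np.allclose(y, y_hat + e)           # data = projection onto design space + residual
assert np.allclose(X.T @ e, 0, atol=1e-8)  # residuals are orthogonal to the design space
```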

15 Mass-univariate analysis, voxel-wise GLM: 1. Transform the data for all subjects and conditions (sensor-to-voxel transform) into evoked response images over space and time. 2. SPM: analyse the data at each voxel.
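As a sketch of the mass-univariate idea, the same design matrix can be fitted to every voxel at once with a single pseudoinverse (the dimensions below are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_vox = 140, 5000                       # trials and voxels (illustrative sizes)
X = np.column_stack([np.repeat([1.0, 0.0], 70), np.repeat([0.0, 1.0], 70)])
Y = rng.normal(size=(n, n_vox))            # one column of data per voxel

B = np.linalg.pinv(X) @ Y                  # p x n_vox parameter estimates, all voxels at once
resid = Y - X @ B                          # voxel-wise residuals from a single matrix product
```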

16 Hypothesis Testing. The null hypothesis H₀ is typically what we want to disprove (i.e. no effect); the alternative hypothesis Hₐ is the outcome of interest.

17 Contrast & t-test. A contrast specifies a linear combination of the parameter vector: c′β. ERP example, faces < scrambled: is β₁ < β₂ (with β̂ᵢ the estimate of βᵢ), i.e. −1×β̂₁ + 1×β̂₂ > 0? With c′ = [-1 +1], test H₀: c′β = 0 against c′β > 0 using T = contrast of estimated parameters / its standard error = c′β̂ / √(s² c′(X′X)⁺c). Computed at every channel and time point, this yields an SPM-t over time & space.
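A sketch of this t-contrast on the simulated design from earlier (all numbers illustrative; this mirrors the formula T = c′β̂ / √(s² c′(X′X)⁺c), not SPM's actual implementation):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
X = np.column_stack([np.repeat([1.0, 0.0], 70), np.repeat([0.0, 1.0], 70)])
Y = X @ np.array([-4.0, -2.5]) + rng.normal(0, 2.0, 140)

beta_hat = np.linalg.pinv(X) @ Y
resid = Y - X @ beta_hat
df = len(Y) - np.linalg.matrix_rank(X)
s2 = resid @ resid / df                        # error variance estimate

c = np.array([-1.0, 1.0])                      # contrast c' = [-1 +1]: scrambled > faces?
T = c @ beta_hat / np.sqrt(s2 * c @ np.linalg.pinv(X.T @ X) @ c)
p = stats.t.sf(T, df)                          # one-sided p-value under the null
```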

18 Hypothesis Testing. The null hypothesis H₀ is typically what we want to disprove (i.e. no effect); the alternative hypothesis Hₐ is the outcome of interest. The test statistic T summarises the evidence about H₀: it is (typically) small in magnitude when H₀ is true and large when it is false. We need to know the distribution of T under the null hypothesis, the null distribution of T.

19 Hypothesis Testing. Significance level α: the acceptable false positive rate. It determines a threshold u_α on the null distribution of T that controls the false positive rate, P(T > u_α | H₀) = α. Observing a test statistic t, a realisation of T, we reject H₀ in favour of Hₐ if t > u_α. The p-value summarises the evidence against H₀: the chance of observing a value more extreme than t under H₀, p = P(T > t | H₀).
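A hedged numeric illustration with scipy.stats (the degrees of freedom and the observed statistic below are made-up values, not course data):

```python
from scipy import stats

alpha = 0.05
df = 138                          # error degrees of freedom from the GLM sketch above
u_alpha = stats.t.isf(alpha, df)  # threshold u_alpha with P(T > u_alpha | H0) = alpha

t_obs = 4.2                       # a hypothetical observed realisation of T
p_val = stats.t.sf(t_obs, df)     # chance of observing a value more extreme than t under H0
reject = t_obs > u_alpha          # reject H0 in favour of H_A at level alpha
```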

20 Contrast & t-test, a few remarks. Contrasts are simple linear combinations of the betas. The t-test is a signal-to-noise measure (the ratio of the estimate to the standard deviation of the estimate). The t-statistic has no dependency on the scaling of the regressors or of the contrast. It is a unilateral (one-sided) test: H₀: c′β = 0 vs. Hₐ: c′β > 0.

21 Extra-sum-of-squares & F-test. Model comparison: the full model X, or the reduced model X₀? Null hypothesis H₀: the true model is X₀ (the reduced model). The test statistic is the ratio of explained to unexplained variability (error): F = [(RSS₀ − RSS)/ν₁] / [RSS/ν₂], with ν₁ = rank(X) − rank(X₀) and ν₂ = N − rank(X).
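A minimal sketch of the extra-sum-of-squares F-test, assuming simulated data and an intercept-only reduced model (all values illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 140
X0 = np.ones((n, 1))                          # reduced model X0: intercept only
x1 = np.repeat([1.0, 0.0], 70)                # regressor of interest
X = np.column_stack([X0, x1])                 # full model
y = 2.0 + 0.8 * x1 + rng.normal(0, 2.0, n)

def rss(M, y):
    r = y - M @ (np.linalg.pinv(M) @ y)       # residuals of the OLS fit on M
    return r @ r

nu1 = np.linalg.matrix_rank(X) - np.linalg.matrix_rank(X0)
nu2 = n - np.linalg.matrix_rank(X)
F = ((rss(X0, y) - rss(X, y)) / nu1) / (rss(X, y) / nu2)
p = stats.f.sf(F, nu1, nu2)                   # p-value for H0: the true model is X0
```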

22 F-test & multidimensional contrasts. The F-test assesses multiple linear hypotheses at once. Full or reduced model? H₀: the true model is X₀, i.e. the regressors of interest X₁ (here β₃ and β₄) contribute nothing. The contrast matrix cᵀ = [0 0 1 0; 0 0 0 1] tests H₀: β₃ = β₄ = 0, i.e. H₀: cᵀβ = 0.

23 Correlated and orthogonal regressors. With correlated regressors, explained variance is shared between the regressors. If x₂ is orthogonalised with respect to x₁ (giving x₂*), only the parameter estimate for x₁ changes, not that for x₂!
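A small simulation of this point (the correlation strength and true betas are made up): orthogonalising x₂ with respect to x₁ leaves x₂'s estimate untouched while x₁'s changes.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200
x1 = rng.normal(size=n)
x2 = 0.7 * x1 + rng.normal(0, 0.7, size=n)   # x2 deliberately correlated with x1
y = 1.0 * x1 + 0.5 * x2 + rng.normal(size=n)

b = np.linalg.pinv(np.column_stack([x1, x2])) @ y    # betas with correlated regressors
x2s = x2 - x1 * (x1 @ x2) / (x1 @ x1)                # x2* = x2 orthogonalised w.r.t. x1
bo = np.linalg.pinv(np.column_stack([x1, x2s])) @ y  # betas after orthogonalisation

print(b, bo)   # the estimate for x1 changes; the estimate for x2 (now x2*) does not
```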

24–30 [Venn-diagram figure series: variability in Y. With orthogonal regressors, each regressor explains a distinct portion of the variability in Y; with correlated regressors, part of the explained variance is shared between the regressors.]

31 Inference & correlated regressors. Inference implicitly tests for an additional effect only, so be careful if there is correlation. Orthogonalisation = decorrelation (not generally needed): the parameters and the test on the non-modified regressor change. It is always simpler to have orthogonal regressors by design. Use F-tests in case of correlation, to see the overall significance: there is generally no way to decide to which regressor the "common" part should be attributed. The original regressors may not matter: it is the contrast you are testing which should be as decorrelated as possible from the rest of the design matrix.

32 Overview: Introduction (ERP example); General Linear Model (definition & design matrix, parameter estimation & interpretation, contrast & inference, correlated regressors); Conclusion.

33 Experimental effects: why and how? Why? To make inferences about effects of interest. How? 1. Decompose the data into effects and error, using a model built from any available knowledge. 2. Form a statistic using the estimates of the effects of interest (via a contrast, e.g. [1 -1]) and of the error: data → model → effects estimate & error estimate → statistic.

34 Thank you for your attention! Any questions? Thanks to Christophe, Klaas, Guillaume, Rik, Will, Stefan, Andrew & Karl for the slides!

