
1 General Linear Model. Lúcia Garrido and Marieke Schölvinck, ICN

2 Observed data. After preprocessing, Y is a matrix of BOLD signals: each column represents a single voxel sampled at successive time points. (Figure: signal intensity plotted over time for one voxel.)

3 Univariate analysis. The GLM works in two steps: it runs an analysis of variance separately at each voxel (a univariate analysis), and then computes a t-statistic from the results of that analysis, again for each voxel.

4 Example. X can contain values quantifying an experimental variable. (Figure: scatter plot of Y against X.)

5 Parameters & error this line is a 'model' of the data slope β = 0.23 Intercept c = 54.5 β: slope of line relating x to y ‘how much of x is needed to approximate y?’ ε = residual error the best estimate of β minimises ε: deviations from line Assumed to be independently, identically and normally distributed Y = βx + c + ε

6 Multiple Regression. Simple regression has a single predictor; multiple regression has more than one predictor/regressor (and hence more than one beta): y = β1x1 + β2x2 + c + ε

7 Matrix Formulation. Write out the equation for each observation of the variable Y, from 1 to J:
Y1 = X11β1 + … + X1lβl + … + X1LβL + ε1
Yj = Xj1β1 + … + Xjlβl + … + XjLβL + εj
YJ = XJ1β1 + … + XJlβl + … + XJLβL + εJ
These simultaneous equations can be collected into matrix form as a single equation: the J×1 vector of observations Y equals the J×L design matrix X multiplied by the L×1 vector of parameters β, plus the J×1 vector of residuals ε.
Y = Xβ + ε
Observed data = Design matrix × Parameters + Residuals/Error

8 GLM and fMRI. Y = Xβ + ε. Observed data: Y is the BOLD signal at various time points at a single voxel. Design matrix: X holds the several components which explain the observed data, i.e. the BOLD time series for the voxel. Parameters: β defines the contribution of each component of the design matrix to the value of Y; the parameters are estimated so as to minimise the error ε, i.e. by least squares. Error: ε is the difference between the observed data, Y, and that predicted by the model, Xβ.

9 Design Matrix. The design matrix holds the values of X; different columns correspond to different predictors, e.g. x1, x2 and a constant c. (Figure: image of the design matrix with one column per regressor.)
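
A minimal sketch (hypothetical predictor values) of assembling such a design matrix in MATLAB, one column per predictor plus the constant column c:

n  = 12;                           % number of scans (assumed)
x1 = [1 1 1 0 0 0 1 1 1 0 0 0]';   % hypothetical first predictor, e.g. a condition switching on and off
x2 = [3 5 2 4 6 1 2 5 3 4 6 1]';   % hypothetical second predictor, e.g. a continuous covariate
c  = ones(n,1);                    % constant column
X  = [x1 x2 c];                    % design matrix: one column per predictor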

10 Parameter estimation. The residuals are e = Y − Ŷ = Y − Xβ, and the residual sum of squares is S = Σj ej² = eᵀe = (Y − Xβ)ᵀ(Y − Xβ). The least-squares estimates are the parameter estimates which minimise this residual sum of squares: take the derivative, solve ∂S/∂β = 0, and obtain β = (XᵀX)⁻¹XᵀY (if XᵀX is invertible). Matlab magic: >> B = pinv(X) * Y (or equivalently B = X \ Y; the design matrix is generally not square, so its ordinary inverse does not exist).
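
A minimal sketch of this estimation step in MATLAB; the design matrix and data below are stand-ins, not values from the slides:

X        = [randn(12,2) ones(12,1)];   % stand-in 12 x 3 design matrix: two regressors plus a constant
Y        = randn(12,1);                % stand-in BOLD data for one voxel
beta_hat = (X'*X) \ (X'*Y);            % solves the normal equations, i.e. beta = inv(X'X) X'Y
Y_hat    = X * beta_hat;               % values predicted by the model
e        = Y - Y_hat;                  % residuals
S        = e' * e;                     % residual sum of squares, minimised by beta_hat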

11 Statistical inference. A beta value is estimated for each column in the design matrix. To test whether a slope is significantly different from zero (the null hypothesis), compute a t-statistic: t = beta / standard error of the beta. When there are many betas, specific questions are asked via contrasts (the contents of another talk…), using t-tests or F-tests depending on the nature of the question.
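
A minimal sketch of such a test in MATLAB; the design matrix, data and the contrast picking out the first beta are all illustrative stand-ins:

X        = [randn(12,2) ones(12,1)];                 % stand-in design matrix, as in the previous sketch
Y        = randn(12,1);                              % stand-in data for one voxel
beta_hat = (X'*X) \ (X'*Y);                          % parameter estimates
e        = Y - X*beta_hat;                           % residuals
con      = [1 0 0]';                                 % contrast vector selecting the first beta
df       = size(X,1) - rank(X);                      % degrees of freedom
sigma2   = (e'*e) / df;                              % estimated error variance
SE       = sqrt(sigma2 * (con' * ((X'*X) \ con)));   % standard error of the contrast of betas
t        = (con' * beta_hat) / SE;                   % t-statistic with df degrees of freedom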

12 Continuous predictors. X can contain values quantifying an experimental variable. (Figure: scatter plot of Y against a continuous X.)

13 Binary predictors. X can contain values distinguishing experimental conditions. (Figure: plot of Y against a binary X.)

14 Covariates vs. conditions. Covariates: parametric modulation of an independent variable, e.g. task difficulty rated 1 to 6. Conditions: 'dummy' codes identifying the different levels of an experimental factor, e.g. integers 0 or 1 for 'off' or 'on'. (Figure: boxcar regressor alternating between on and off blocks.)
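
A minimal sketch (hypothetical block ordering and difficulty ratings) of the two kinds of regressor written as design-matrix columns in MATLAB:

on_off     = [1 1 1 0 0 0 0 0 0 1 1 1]';      % condition dummy codes: 1 = 'on', 0 = 'off'
difficulty = [2 4 6 0 0 0 0 0 0 1 3 5]';      % parametric covariate: task difficulty during the 'on' scans
X          = [on_off difficulty ones(12,1)];  % both enter the design matrix as ordinary columns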

15 Ways to improve your model: modelling haemodynamics. The brain does not just switch on and off! Reshape (convolve) the regressors so that they resemble the HRF. (Figure: HRF basis function; original regressor; convolved regressor.)
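
A minimal sketch of that convolution in MATLAB; the double-gamma HRF parameters only approximate the commonly used canonical shape, and the block timing is made up:

t      = (0:30)';                              % time in seconds after an event (TR of 1 s assumed)
hrf    = t.^5 .* exp(-t) / gamma(6) ...        % positive lobe: gamma density peaking around 5-6 s
       - t.^15 .* exp(-t) / (6*gamma(16));     % post-stimulus undershoot: later, smaller gamma density
hrf    = hrf / max(hrf);                       % scale to unit peak
boxcar = kron([0 1 0 1 0]', ones(10,1));       % hypothetical on/off block regressor, one value per scan
reg    = conv(boxcar, hrf);                    % convolve the boxcar with the HRF
reg    = reg(1:numel(boxcar));                 % trim back to the number of scans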

16 Ways to improve your model: model everything. It is important to model all known variables, even if they are not experimentally interesting, e.g. head movement, block and subject effects. This minimises the residual error variance, giving better statistics; the effects-of-interest are still the regressors you're actually interested in. (Figure: design matrix with columns for the conditions (effects of interest), the subjects, and global activity or movement.)
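
A minimal sketch (all regressors here are hypothetical stand-ins) of adding such nuisance columns next to the effect of interest:

n      = 50;                          % number of scans (assumed)
reg    = randn(n,1);                  % stand-in for the convolved regressor of interest
motion = randn(n,6);                  % stand-in realignment parameters (head-movement nuisance regressors)
block  = kron(eye(2), ones(n/2,1));   % dummy columns modelling two blocks/sessions
X_full = [reg motion block];          % effects of interest first, then the nuisance regressors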

17 fMRI characteristics which may increase error. Variable gain and scanner drift: variations of signal amplitude with every volume and between scanning sessions; dealt with by proportional and grand-mean scaling of the data and by high-pass filtering in the design matrix. Serial temporal correlations (e.g. breathing, heartbeat): activity at one time point correlates with activity at other times; dealt with by adjusting the error term.
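
The high-pass filter is commonly implemented as a set of slowly varying discrete cosine regressors in the design matrix; a minimal sketch in MATLAB (the scan count, TR and 128 s cutoff are assumptions):

N      = 200;                             % number of scans (assumed)
TR     = 2;                               % repetition time in seconds (assumed)
cutoff = 128;                             % high-pass cutoff period in seconds (a common default)
K      = floor(2*N*TR/cutoff);            % number of low-frequency cosine regressors
s      = (0:N-1)';                        % scan index
dct    = zeros(N, K);
for k = 1:K
    dct(:,k) = sqrt(2/N) * cos(pi*(2*s + 1)*k / (2*N));   % k-th discrete cosine basis function
end
% Appending these columns to the design matrix models slow scanner drift.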

18 Summary. The General Linear Model allows you to find the parameters, β, which provide the best fit to your data, Y. The optimal parameter estimates, β, are found by minimising the sum of squared differences between your model's predictions and the observed data. The design matrix in SPM contains the information about the factors, X, which may explain the observed data. Once we have obtained the βs at each voxel, we can use them for various statistical tests.

19 Thanks to… Previous MfD talks: Elliot Freeman (2005), Davina Bristow and Beatriz Calvo (2004). http://www.fil.ion.ucl.ac.uk/spm/doc/books/hbf2/pdfs/Ch7.pdf http://www.mrc-cbu.cam.ac.uk/Imaging/Common/spmstats.shtml

20 Summary. Y = Xβ + ε. Observed data: SPM uses a mass-univariate approach, that is, each voxel is treated as a separate column vector of data; Y is the BOLD signal at various time points at a single voxel. Design matrix: X holds the several components which explain the observed data, i.e. the BOLD time series for the voxel: timing information (onset vectors Oᵐⱼ and duration vectors Dᵐⱼ), the HRF hₘ, which describes the shape of the expected BOLD response over time, and other regressors, e.g. realignment parameters. Parameters: β defines the contribution of each component of the design matrix to the value of Y, estimated so as to minimise the error ε, i.e. by least squares. Error: ε is the difference between the observed data, Y, and that predicted by the model, Xβ; it is not assumed to be spherical in fMRI.

