Overview of SPM: Realignment, Smoothing, Normalisation, General linear model, Statistical parametric map (SPM), Image time-series, Parameter estimates, Design matrix.


1 Overview of SPM. Figure: the SPM analysis pipeline. An image time-series undergoes realignment, smoothing (with a Gaussian kernel) and normalisation (to a template); a general linear model with a design matrix yields parameter estimates; these produce a statistical parametric map (SPM), and statistical inference at p < 0.05 uses Gaussian field theory.

2 The General Linear Model (GLM). With many thanks for slides & images to: the FIL Methods group, Virginia Flanagin and Klaas Enno Stephan. Frederike Petzschner, Translational Neuromodeling Unit (TNU), Institute for Biomedical Engineering, University of Zurich & ETH Zurich.

3 Imagine a very simple experiment… One session: 7 cycles of rest and listening, in blocks of 6 scans with a 7 s TR.

4 Imagine a very simple experiment… We record a single-voxel time series. Question: is there a change in the BOLD response between listening and rest? What we know: the timing of the rest and listening blocks. What we measure: the voxel's signal over time.

6 You need a model of your data… Question: is there a change in the BOLD response between listening and rest? The plan: fit a linear model, obtain an effects estimate and an error estimate, and combine them into a statistic.

7 Explain your data… as a combination of experimental manipulation, confounds and errors. Single-voxel regression model: y = x₁β₁ + x₂β₂ + e, where y is the BOLD signal over time, x₁ is the task regressor, x₂ a constant regressor, and e the error.

9 The black-and-white version in SPM: y = Xβ + e, i.e. an n × 1 data vector equals an n × p design matrix times a p × 1 parameter vector plus an n × 1 error vector (n: number of scans, p: number of regressors).
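The design matrix for the slides' block design can be sketched in NumPy (the scan counts follow the slides: 7 cycles of 6-scan rest and listening blocks; the constant column modelling the session mean is a standard, assumed addition):

```python
import numpy as np

# Block design from the slides: 7 cycles of rest/listening,
# blocks of 6 scans each -> n = 84 scans in total.
n_cycles, block_len = 7, 6
listening = np.tile(np.r_[np.zeros(block_len), np.ones(block_len)], n_cycles)

# Design matrix X (n x p): a boxcar task regressor plus a constant
# regressor modelling the session mean (p = 2).
X = np.column_stack([listening, np.ones_like(listening)])
print(X.shape)   # (84, 2)
```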

10 The design matrix embodies all available knowledge about experimentally controlled factors and potential confounds. (Talk: Experimental Design, Wed 9:45 – 10:45.) Model assumptions: you want to estimate your parameters such that you minimize the sum of squared errors eᵀe. This can be done using ordinary least squares (OLS) estimation, assuming i.i.d. errors.

11 The GLM assumes identically and independently distributed (i.i.d.) errors, i.e. the error covariance is a scalar multiple of the identity matrix: Cov(e) = σ²I. Violations are non-identity (unequal variances across time points t₁, t₂) and non-independence (correlations between time points).

12 How to fit the model and estimate the parameters of y = Xβ + e? Option 1: by hand.

13 How to fit the model and estimate the parameters? In y = Xβ + e, Xβ is the data predicted by our model and e is the error between predicted and actual data. The goal is to determine the betas such that we minimize the quadratic error: OLS (ordinary least squares).

14–19 OLS (Ordinary Least Squares). The goal is to minimize the quadratic error between data and model:

eᵀe = (y − Xβ)ᵀ(y − Xβ) = yᵀy − 2βᵀXᵀy + βᵀXᵀXβ

(yᵀXβ is a scalar, and the transpose of a scalar is the same scalar, so it can be combined with βᵀXᵀy.) You find the extremum of a function by taking its derivative and setting it to zero:

∂(eᵀe)/∂β = −2Xᵀy + 2XᵀXβ = 0  ⟹  XᵀXβ̂ = Xᵀy  ⟹  β̂ = (XᵀX)⁻¹Xᵀy
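The closed-form OLS estimate can be checked numerically on simulated data (the design, true betas and noise level below are all illustrative, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data y = X @ beta_true + noise (all values illustrative).
n = 84
X = np.column_stack([rng.standard_normal(n), np.ones(n)])
beta_true = np.array([2.0, 0.5])
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# Closed-form OLS estimate: solve (X'X) beta = X'y.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# The residuals are orthogonal to the design space: X'e = 0.
e = y - X @ beta_hat
print(np.abs(X.T @ e).max())   # numerically ~0
```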

20 A geometric perspective on the GLM: the data vector y is projected onto the design space defined by X (spanned by x₁ and x₂); the OLS estimates are the coordinates of that projection, and the residual e is orthogonal to the design space.

21 Correlated and orthogonal regressors. Correlated regressors mean that explained variance is shared between regressors. When x₂ is orthogonalized with regard to x₁ (giving x₂*), only the parameter estimate for x₁ changes, not that for x₂!
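A small NumPy simulation illustrates this point (x₁, x₂ and the noise level are arbitrary choices for the demonstration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
x1 = rng.standard_normal(n)
x2 = 0.6 * x1 + rng.standard_normal(n)      # x2 is correlated with x1
y = 1.0 * x1 + 2.0 * x2 + 0.1 * rng.standard_normal(n)

def ols(X, y):
    # OLS estimate beta = (X'X)^{-1} X'y
    return np.linalg.solve(X.T @ X, X.T @ y)

b = ols(np.column_stack([x1, x2]), y)

# Orthogonalize x2 with respect to x1 (subtract its projection onto x1).
x2_perp = x2 - (x1 @ x2) / (x1 @ x1) * x1
b_orth = ols(np.column_stack([x1, x2_perp]), y)

# The estimate for x2 is unchanged; the estimate for x1 has absorbed
# the variance previously shared with x2.
print(b, b_orth)
```

Because [x₁, x₂] and [x₁, x₂*] span the same space, the fitted values are identical; only the attribution of shared variance moves.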

22 We are nearly there… The pipeline so far: linear model (y = Xβ + e), then effects estimate and error estimate, then statistic.

23 What are the problems? 1. BOLD responses have a delayed and dispersed form. 2. The BOLD signal includes substantial amounts of low-frequency noise. 3. The data are serially correlated (temporally autocorrelated); this violates the assumptions of the noise model in the GLM. The first two problems are handled in the design, the third in the error model.

24 Problem 1: shape of the BOLD response. The response of a linear time-invariant (LTI) system is the convolution of the input with the system's response to an impulse (delta function).

25 Solution: convolution model of the BOLD response. Expected BOLD response = input function convolved with the impulse response function (HRF). Blue = data; green = predicted response, taking the HRF into account; red = predicted response, NOT taking the HRF into account.
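A minimal sketch of the convolution model. The single-gamma impulse response below is a crude stand-in for SPM's canonical double-gamma HRF; its shape parameters, and the sampling grid, are illustrative assumptions:

```python
import numpy as np

TR = 7.0                         # repetition time from the slides (s)
n_scans = 84                     # 7 cycles of 6 rest + 6 listening scans

# Crude single-gamma impulse response standing in for SPM's canonical
# double-gamma HRF; shape parameters are illustrative, peak near 6 s.
t = np.arange(0.0, 32.0, TR)     # HRF sampled at the TR
hrf = t**6 * np.exp(-t)
hrf /= hrf.sum()

# Boxcar input function: alternating rest/listening blocks.
boxcar = np.tile(np.r_[np.zeros(6), np.ones(6)], 7)

# Expected BOLD response = input function convolved with the HRF,
# truncated back to the length of the session.
expected_bold = np.convolve(boxcar, hrf)[:n_scans]
```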

26 Problem 2: low-frequency noise. Blue = data; black = mean + low-frequency drift; green = predicted response, taking the low-frequency drift into account; red = predicted response, NOT taking the low-frequency drift into account.

28 Solution 2: high-pass filtering with a discrete cosine transform (DCT) basis set. Low-frequency cosine regressors model the slow drifts, which are then removed from the data.
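A DCT drift basis can be sketched as follows (the 128 s cutoff is SPM's default; the drifting signal is simulated for the demonstration):

```python
import numpy as np

n, TR = 84, 7.0               # scans and repetition time from the slides
cutoff = 128.0                # high-pass cutoff in seconds (SPM's default)

# DCT basis: f_k(i) = cos(pi * k * (2i + 1) / (2n)); keep the K slowest
# components, i.e. those with periods longer than the cutoff.
K = int(2 * n * TR // cutoff)
i = np.arange(n)
dct = np.column_stack(
    [np.cos(np.pi * k * (2 * i + 1) / (2 * n)) for k in range(1, K + 1)]
)

# High-pass filtering = regressing the drift basis out of the data
# (here a simulated slow drift plus noise).
rng = np.random.default_rng(2)
y = np.sin(2 * np.pi * i / n) + rng.standard_normal(n)
beta = np.linalg.lstsq(dct, y, rcond=None)[0]
y_filtered = y - dct @ beta
```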

29 Problem 3: serial correlations. Under i.i.d. errors the n × n error covariance matrix is σ²I; serially correlated errors instead show non-identity (unequal variances) and/or non-independence (non-zero covariances between time points t₁ and t₂) (n: number of scans).

30 Problem 3: serial correlations. The n × n error covariance matrix is modelled via the autocovariance function of a 1st-order autoregressive process, AR(1): eₜ = a·eₜ₋₁ + εₜ (n: number of scans).
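The AR(1) covariance structure can be illustrated numerically (the coefficient a = 0.4 is just an example value):

```python
import numpy as np

n, a = 84, 0.4      # number of scans; hypothetical AR(1) coefficient

# For a stationary AR(1) process e_t = a * e_{t-1} + eps_t, the error
# correlation matrix has entries V[i, j] = a ** |i - j|: ones on the
# diagonal, geometrically decaying off-diagonal bands.
idx = np.arange(n)
V = a ** np.abs(idx[:, None] - idx[None, :])

# Empirical check: the lag-1 autocorrelation of a long simulated
# series should be close to a.
rng = np.random.default_rng(3)
eps = rng.standard_normal(200_000)
e = np.zeros_like(eps)
for t in range(1, e.size):
    e[t] = a * e[t - 1] + eps[t]
lag1 = np.corrcoef(e[:-1], e[1:])[0, 1]
print(lag1)         # close to 0.4
```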

31 Problem 3: serial correlations. Pre-whitening: 1. Use an enhanced noise model with multiple error covariance components, i.e. e ~ N(0, σ²V) instead of e ~ N(0, σ²I). 2. Use the estimated serial correlation to specify a filter matrix W for whitening the data, so that the whitened errors We are i.i.d.

32 How do we define W? Enhanced noise model: e ~ N(0, σ²V). Remember the linear transform for Gaussians: We ~ N(0, σ²WVWᵀ). Choose W such that the error covariance becomes spherical, WVWᵀ = I, which gives W = V^(−1/2). Conclusion: W is a simple function of V, so how do we estimate V?
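A sketch of computing W = V^(−1/2), assuming an AR(1) covariance with an illustrative coefficient a = 0.4:

```python
import numpy as np

# Assumed error covariance: AR(1) structure with illustrative a = 0.4.
n, a = 84, 0.4
idx = np.arange(n)
V = a ** np.abs(idx[:, None] - idx[None, :])

# Whitening matrix W = V^{-1/2}, built from the eigendecomposition of V.
# Then Cov(We) = W V W' = I: the whitened errors are spherical (i.i.d.).
vals, vecs = np.linalg.eigh(V)
W = vecs @ np.diag(vals**-0.5) @ vecs.T

print(np.allclose(W @ V @ W.T, np.eye(n)))   # True
```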

33 Find V: multiple covariance components. In the enhanced noise model the error covariance is decomposed as V = λ₁Q₁ + λ₂Q₂, a weighted sum of error covariance components Qᵢ with hyperparameters λᵢ. The hyperparameters are estimated with EM (expectation maximisation) or ReML (restricted maximum likelihood).

34 We are there… The GLM (y = Xβ + e) includes all known experimental effects and confounds: convolution with a canonical HRF; high-pass filtering to account for low-frequency drifts; estimation of multiple variance components (e.g. to account for serial correlations).

35 We are there… y = Xβ + e. A contrast vector c picks out the effect of interest; the null hypothesis is cᵀβ = 0. (Talk: Statistical Inference and design efficiency. Next talk.)

36 We are there… Mass-univariate approach: the GLM is applied separately to the single-voxel time series of more than 100,000 voxels. At a threshold of p < 0.05, more than 5,000 voxels are significant by chance alone: a massive problem with multiple comparisons! Solution: Gaussian random field theory.
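The slide's arithmetic, as a quick sanity check:

```python
# Mass-univariate testing: one GLM test per voxel.
n_voxels = 100_000       # slide: > 100,000 voxels
alpha = 0.05             # uncorrected significance threshold

# Under the null hypothesis each test still has a 5% false-positive
# rate, so the expected number of voxels significant purely by chance:
expected_false_positives = n_voxels * alpha
print(expected_false_positives)   # about 5000, matching the slide
```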

37 Outlook: further challenges. Correction for multiple comparisons (Talk: Multiple Comparisons, Wed 8:30 – 9:30). Variability in the HRF across voxels (Talk: Experimental Design, Wed 9:45 – 10:45). Limitations of frequentist statistics (Talk: entire Friday). The GLM ignores interactions among voxels (Talk: Multivariate Analysis, Thu 12:30 – 13:30).

38 Thank you! Friston, Ashburner, Kiebel, Nichols, Penny (2007) Statistical Parametric Mapping: The Analysis of Functional Brain Images. Elsevier. Christensen R (1996) Plane Answers to Complex Questions: The Theory of Linear Models. Springer. Friston KJ et al. (1995) Statistical parametric maps in functional imaging: a general linear approach. Human Brain Mapping 2.

