1
Basis Functions The SPM MfD course 12th Dec 2007 Elvina Chu

2
Introduction What is a basis function? What do they do in MRI? How are they useful in SPM?

3
Basis Mathematical term: a set of vectors that can describe any point in a space. Euclidean example: the x, y, z co-ordinates. In two dimensions, with unit vectors i and j, the point (4, 2) is the vector v = 4i + 2j.

4
Each function in a function space can be represented as a linear combination of basis functions. Linear algebra: an orthonormal basis is one whose elements all have unit length and are mutually perpendicular.
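As a minimal sketch of the idea, the coefficients of a vector in an orthonormal basis can be read off with dot products:

```python
import numpy as np

# Orthonormal basis for 2-D Euclidean space (the i and j unit vectors).
i = np.array([1.0, 0.0])
j = np.array([0.0, 1.0])

# Any point/vector is a linear combination of the basis vectors.
v = 4 * i + 2 * j          # v = 4i + 2j  ->  [4, 2]

# With an orthonormal basis, each coefficient is recovered by a dot product.
coeffs = np.array([v @ i, v @ j])
print(coeffs)              # [4. 2.]
```

The same dot-product trick is what makes orthonormal basis sets convenient for function spaces as well.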

5
Uses in SPM Spatial normalisation to register different subjects to the same co-ordinate system. Ease of reporting in standard space. Useful for reporting what happens generically to individuals in functional imaging.

6

7
Uses in SPM Basis functions are used to model the haemodynamic response: finite impulse response, Fourier.

8
Fourier Basis % signal change with time. Fourier analysis: the complex wave at the top can be decomposed into the sum of the three simpler waves shown below: f(t) = h1(t) + h2(t) + h3(t)
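A small sketch of the decomposition in the figure, with three hypothetical sinusoids standing in for h1, h2 and h3:

```python
import numpy as np

# A "complex" wave built as the sum of three simpler sinusoids,
# illustrating f(t) = h1(t) + h2(t) + h3(t). The frequencies and
# amplitudes here are illustrative, not taken from the slide's figure.
t = np.linspace(0, 1, 200, endpoint=False)    # 1 s sampled at 200 Hz
h1 = np.sin(2 * np.pi * 1 * t)                # fundamental
h2 = 0.5 * np.sin(2 * np.pi * 3 * t)          # 3rd harmonic
h3 = 0.25 * np.sin(2 * np.pi * 5 * t)         # 5th harmonic
f = h1 + h2 + h3

# The FFT recovers the amplitude of each component (bins 1, 3 and 5 Hz).
amps = np.abs(np.fft.rfft(f)) / (len(t) / 2)
print(np.round(amps[[1, 3, 5]], 2))           # [1.   0.5  0.25]
```

Going the other way (summing weighted sinusoids to fit a measured response) is exactly how a Fourier basis set models the haemodynamic response.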

9
Gamma Function Provides a reasonably good fit to the impulse response, although it lacks an undershoot. Fewer functions are required to capture the typical range of impulse responses than with other sets, thus reducing the degrees of freedom in the design matrix.

10
Canonical haemodynamic response function (HRF) Typical BOLD response to an impulse stimulation The response peaks approximately 5 sec after stimulation, and is followed by an undershoot.

11
Canonical HRF The canonical HRF is a "typical" BOLD impulse response characterised by two gamma functions. The temporal derivative can capture differences in latency of the peak response; the dispersion derivative can capture differences in duration of the peak response.
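A sketch of the two-gamma shape (this is not SPM's `spm_hrf`, whose exact defaults differ; the parameters below are merely chosen to resemble it):

```python
import math
import numpy as np

def gamma_pdf(t, shape, scale=1.0):
    """Gamma probability density, evaluated pointwise (stdlib gamma)."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    pos = t > 0
    out[pos] = (t[pos] ** (shape - 1) * np.exp(-t[pos] / scale)
                / (scale ** shape * math.gamma(shape)))
    return out

# Canonical HRF as the difference of two gamma functions:
# a positive response peaking ~5 s minus a smaller, later undershoot.
t = np.arange(0, 32, 0.1)
hrf = gamma_pdf(t, shape=6) - gamma_pdf(t, shape=16) / 6
hrf /= hrf.max()

print(round(t[hrf.argmax()], 1))   # peak time in seconds (~5 s)
print(hrf.min() < 0)               # undershoot present -> True
```

Plotting `hrf` reproduces the familiar shape: a rise to a peak around 5 s, then a dip below baseline before returning to zero.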

12
Design matrix 3 regressors are used to model each condition. The three basis functions are: 1. canonical HRF, 2. derivative with respect to time, 3. derivative with respect to dispersion. (Design matrix columns shown: Left, Right, Mean.)
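The construction can be sketched as follows. Everything here is illustrative: the event onsets are invented, the HRF is a simplified double-gamma, and the dispersion derivative is approximated by a finite difference on a slightly stretched HRF rather than SPM's analytic form:

```python
import math
import numpy as np

def gpdf(t, a):
    """Gamma pdf with unit scale, for t >= 0."""
    t = np.asarray(t, dtype=float)
    return np.where(t > 0, t ** (a - 1) * np.exp(-t) / math.gamma(a), 0.0)

dt = 0.1
t = np.arange(0, 32, dt)
hrf = gpdf(t, 6) - gpdf(t, 16) / 6           # canonical double-gamma shape
dhrf = np.gradient(hrf, dt)                  # temporal derivative (latency)
# Dispersion derivative: finite difference w.r.t. a width ("stretch") factor.
eps = 0.01
ddhrf = (gpdf(t / (1 + eps), 6) - gpdf(t / (1 + eps), 16) / 6 - hrf) / eps

# Hypothetical stimulus train: impulses at 10 s and 40 s of a 60 s run.
tt = np.arange(0, 60, dt)
stim = np.zeros(tt.size)
stim[[100, 400]] = 1.0

# Convolve the stimulus with each basis function; add a constant column.
X = np.column_stack([np.convolve(stim, b)[:tt.size]
                     for b in (hrf, dhrf, ddhrf)])
X = np.column_stack([X, np.ones(tt.size)])
print(X.shape)   # (600, 4): three basis regressors plus the mean
```

Each condition contributes one such triplet of columns, which is why the slide's design matrix shows three regressors per condition.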

13
Comparison of the fitted response These plots show the haemodynamic response at a single voxel. The left plot shows the HRF as estimated using the simple model; the lack of fit is corrected on the right using a more flexible model with basis functions.

14
Summary Basis functions identify position in space. Used to model the HRF of the BOLD response to an impulse stimulation in fMRI. SPM allows you to choose from 4 different basis functions.

15
Multiple Regression Analysis & Correlated Regressors Hanneke den Ouden Methods for Dummies /12/2007

16
Overview General regression analysis Multiple regression Collinearity / correlated regressors Orthogonalisation of regressors in SPM

17
Regression analysis Regression analysis examines the relation of a dependent variable Y to a specified independent variable X: Y = aX + b. If the model fits the data well: R2 is high (reflects the proportion of variance in Y explained by the regressor X), and the corresponding p value will be low.
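A minimal worked example with synthetic data (true values a = 2, b = 1 are an assumption of the demo, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: Y = aX + b plus noise, with true a = 2, b = 1.
X = np.linspace(0, 10, 50)
Y = 2 * X + 1 + rng.normal(0, 0.5, size=X.size)

# Least-squares fit of Y = aX + b.
a, b = np.polyfit(X, Y, 1)

# R^2: proportion of variance in Y explained by the regressor X.
Y_hat = a * X + b
r2 = 1 - np.sum((Y - Y_hat) ** 2) / np.sum((Y - Y.mean()) ** 2)
print(a, b, r2 > 0.95)   # a near 2, b near 1, True
```

With a good fit, R2 is close to 1 and the associated p value for the slope would be tiny.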

18
Multiple regression analysis Multiple regression characterises the relationship between several independent variables (or regressors), X1, X2, X3, etc., and a single dependent variable, Y: Y = β1X1 + β2X2 + … + βLXL + ε. The X variables are combined linearly and each has its own regression coefficient β (weight). The βs reflect the independent contribution of each regressor X to the value of the dependent variable Y, i.e. the proportion of the variance in Y accounted for by each regressor after all other regressors are accounted for.
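The same fit with two regressors, again on invented data (true weights β1 = 3, β2 = −2 are the demo's assumption):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two independent regressors plus noise; true weights beta1=3, beta2=-2.
n = 100
X1 = rng.normal(size=n)
X2 = rng.normal(size=n)
Y = 3 * X1 - 2 * X2 + rng.normal(0, 0.5, size=n)

# Design matrix with a constant column; solve Y = X beta by least squares.
X = np.column_stack([X1, X2, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(np.round(beta, 1))   # close to [3, -2, 0]
```

Because X1 and X2 are uncorrelated here, each β cleanly reflects its regressor's own contribution; the next slides show what goes wrong when they are not.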

19
Multicollinearity Multiple regression results are sometimes difficult to interpret: the overall p value of a fitted model is very low, i.e. the model fits the data well, yet the individual p values for the regressors are high, i.e. none of the X variables appears to have a significant impact on predicting Y. How is this possible? It is caused when two (or more) regressors are highly correlated: a problem known as multicollinearity.

20
Multicollinearity Are correlated regressors a problem? No, when you want to predict Y from X1 and X2, because R2 and p will be correct. Yes, when you want to assess the impact of individual regressors, because individual p values can be misleading: a p value can be high even though the variable is important. In practice this will nearly always be the case.
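The effect can be demonstrated numerically. In this sketch (synthetic data; the noise levels are arbitrary), two nearly identical regressors jointly predict Y well, but each individual β carries a hugely inflated standard error:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200

# Two highly correlated regressors (X2 is X1 plus a little noise).
X1 = rng.normal(size=n)
X2 = X1 + rng.normal(0, 0.1, size=n)
Y = X1 + X2 + rng.normal(0, 1.0, size=n)

def beta_se(X, Y):
    """OLS betas and their standard errors."""
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    sigma2 = resid @ resid / (len(Y) - X.shape[1])
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
    return beta, se

X_both = np.column_stack([X1, X2])
X_sum = (X1 + X2).reshape(-1, 1)   # a single combined regressor

_, se_both = beta_se(X_both, Y)
_, se_sum = beta_se(X_sum, Y)

# With correlated regressors, the individual standard errors are far larger
# than for the combined regressor, so individual p values look insignificant
# even though the pair jointly predicts Y well.
print(se_both.max() / se_sum.max() > 5)   # True
```

This is exactly the slide's point: prediction (R2) is unharmed, but attribution to individual regressors becomes unreliable.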

21
General Linear Model & Correlated Regressors

22
General Linear Model and fMRI Y = X.β + ε Observed data: Y is the BOLD signal at various time points at a single voxel. Design matrix: several components which explain the observed data Y: different stimuli, movement regressors. Parameters (or betas): define the contribution of each component of the design matrix to the value of Y. Error (or residuals): any variance in Y that cannot be explained by the model X.β.

23
Collinearity example Experiment: which areas of the brain are active in reward processing? Subjects press a button to get a reward when they spot a red dot amongst green dots. Model to be fit: Y = β1X1 + β2X2 + ε, where Y = BOLD response, X1 = button press (movement), X2 = response to reward.

24
Collinearity example Which areas of the brain are active in reward processing? The regressors are linearly dependent (correlated), so variance attributable to an individual regressor may be confounded with other regressor(s). As a result we don't know which part of the BOLD response is explained by movement and which by the response to getting a reward; this may lead to misinterpretations of activations in certain brain areas. Primary motor cortex involved in reward processing?? We can't answer the question.

25
How to deal with collinearity Avoid it: design the experiment so that the independent variables are uncorrelated. Use common sense. Use the toolbox "Design Magic" - Multicollinearity assessment for fMRI for SPM (URL: ). It allows you to assess the multicollinearity in your fMRI design by calculating the amount of factor variance that is also accounted for by the other factors in the design (expressed in R2). It also allows you to reduce correlations between regressors through use of high-pass filters.
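Design Magic itself is a MATLAB toolbox, but the core quantity it reports (each regressor's variance accounted for by the other regressors, as R2) can be sketched like this, on made-up regressors:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200

# Three hypothetical design-matrix regressors, two of them correlated.
X1 = rng.normal(size=n)
X2 = 0.9 * X1 + 0.1 * rng.normal(size=n)
X3 = rng.normal(size=n)
X = np.column_stack([X1, X2, X3])

def collinearity_r2(X):
    """R^2 of each regressor when predicted from all the others."""
    r2 = []
    for k in range(X.shape[1]):
        others = np.column_stack([np.delete(X, k, axis=1),
                                  np.ones(len(X))])
        beta, *_ = np.linalg.lstsq(others, X[:, k], rcond=None)
        resid = X[:, k] - others @ beta
        r2.append(1 - resid @ resid
                  / np.sum((X[:, k] - X[:, k].mean()) ** 2))
    return np.array(r2)

r2 = collinearity_r2(X)
print(np.round(r2, 2))   # high for X1 and X2, near zero for X3
```

An R2 near 1 for a regressor flags it as (nearly) redundant given the others, which is the warning sign to look for in your own design.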

26
How to deal with collinearity II Orthogonalise the correlated regressor variables using factor analysis (like PCA). This will produce linearly independent regressors and corresponding factor scores; these factor scores can subsequently be used instead of the original correlated regressor values. However, the meaning of these factors is rather unclear, so SPM does not do this. Instead SPM does something called serial orthogonalisation (note that this is done only within each condition, i.e. for each condition and its associated parametric modulators, if there are any).

27
Serial Orthogonalisation Y = β1X1, with β1 = 1.5. When we have only one regressor, things are simple…

28
Serial Orthogonalisation Y = β1X1 + β2X2, with β1 = 1 and β2 = 1. When we have two correlated regressors, things become difficult… The value of β1 is now smaller, so X1 now explains less of the variance, as X2 explains some of the variance X1 used to explain.

29
Serial Orthogonalisation Y = β1X1 + β2*X2*, with β1 = 1.5 and β2* = 1. We now orthogonalise X2 with respect to X1, and call this X2*. - β1 now again has the original value it had when X2 was not included - β2* is the same value as β2 - X2* is a different regressor from X2!!!
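The slide's claims can be verified numerically with a Gram-Schmidt step (synthetic data here, not the slide's actual numbers):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500

# Two correlated regressors and a response depending on both.
X1 = rng.normal(size=n)
X2 = 0.7 * X1 + rng.normal(0, 0.7, size=n)
Y = 1.0 * X1 + 1.0 * X2 + rng.normal(0, 0.1, size=n)

def ols(X, Y):
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return beta

# Fit with X1 alone, then with both correlated regressors.
b_alone = ols(X1.reshape(-1, 1), Y)[0]
b_both = ols(np.column_stack([X1, X2]), Y)

# Orthogonalise X2 with respect to X1 (remove the part X1 explains).
X2_star = X2 - (X2 @ X1) / (X1 @ X1) * X1
b_serial = ols(np.column_stack([X1, X2_star]), Y)

# beta1 in the orthogonalised model equals beta1 from the X1-only model,
# and beta2* equals beta2 from the joint (non-orthogonalised) model.
print(np.isclose(b_serial[0], b_alone))    # True
print(np.isclose(b_serial[1], b_both[1]))  # True
```

The second identity holds because {X1, X2*} span the same space as {X1, X2}, so the overall fit is unchanged; only the attribution of shared variance moves, entirely onto X1.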

30
Serial Orthogonalisation in SPM Regressors are orthogonalised from left to right in the design matrix, so the order in which you put parametric modulators is important!!! Put the 'most important' modulators first (i.e. the ones whose meaning you don't want to change). If you add an orthogonalised regressor, the values of the preceding regressors do not change: the regressor you orthogonalise to (X1) does not change; the regressor you are orthogonalising (X2) does change. Plot the orthogonalised regressors to see what it is you are actually estimating.

31
Conclusions Correlated regressors can be a big problem when analysing / interpreting your data. Try to design your experiment such that you avoid correlated regressors. Estimate how much your regressors are correlated so you know what you're getting yourself into. If you cannot avoid them: think about the order of the regressors in your design matrix, and look at what the regressors look like after orthogonalisation.

32
Sources Will Penny & Klaas Stephan. Rik Henson's slides: www.mrc-cbu.cam.ac.uk/Imaging/Common/rikSPM-GLM.ppt. Previous years' presenters' slides.
