The SPM MfD course 12th Dec 2007 Elvina Chu


Basis Functions

Introduction: What is a basis function? What do basis functions do in MRI? How are they useful in SPM?

Basis: a mathematical term for a set of vectors that can describe any point in a space, e.g. the Euclidean basis given by the x, y, z co-ordinates. For example, in two dimensions the vector v = 4i + 2j is described by its components along the unit basis vectors i and j.

Function: each function in a function space can be represented as a linear combination of basis functions. In linear algebra, a basis is orthonormal if its elements all have the same unit length and are mutually perpendicular.
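The two ideas above can be checked numerically. A minimal sketch (illustrative, not from the slides) using the 2D Euclidean basis:

```python
import numpy as np

# The standard 2D Euclidean basis vectors
i = np.array([1.0, 0.0])
j = np.array([0.0, 1.0])

# Orthonormal: each has unit length, and they are perpendicular
assert np.isclose(np.linalg.norm(i), 1.0)
assert np.isclose(np.linalg.norm(j), 1.0)
assert np.isclose(np.dot(i, j), 0.0)

# The vector v = 4i + 2j from the previous slide, built as a
# linear combination of the basis vectors
v = 4 * i + 2 * j
print(v)  # [4. 2.]
```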

Uses in SPM: spatial normalisation registers different subjects to the same co-ordinate system. This eases reporting in standard space, and is useful for reporting what happens generically across individuals in functional imaging.

Uses in SPM: basis functions are also used to model the haemodynamic response, e.g. finite impulse response (FIR) sets and Fourier sets.

Fourier Basis: plotting % signal change with time, Fourier analysis shows that a complex wave can be decomposed into the sum of simpler waves, e.g. f(t) = h1(t) + h2(t) + h3(t).
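The decomposition on this slide can be sketched numerically. The frequencies and amplitudes below are assumptions chosen purely for illustration:

```python
import numpy as np

# A "complex" wave built as the sum of three simpler sinusoids,
# mirroring f(t) = h1(t) + h2(t) + h3(t)
t = np.linspace(0, 1, 200)
h1 = np.sin(2 * np.pi * 1 * t)
h2 = 0.5 * np.sin(2 * np.pi * 3 * t)
h3 = 0.25 * np.sin(2 * np.pi * 5 * t)
f = h1 + h2 + h3

# The complex wave is exactly the sum of its basis components
assert np.allclose(f, h1 + h2 + h3)
```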

Gamma Function: provides a reasonably good fit to the impulse response, although it lacks an undershoot. Fewer functions are required to capture the typical range of impulse responses than with other sets, thus reducing the degrees of freedom used in the design matrix.

Canonical haemodynamic response function (HRF) Typical BOLD response to an impulse stimulation The response peaks approximately 5 sec after stimulation, and is followed by an undershoot.

Canonical HRF, temporal derivative, and dispersion derivative: the canonical HRF is a “typical” BOLD impulse response characterised by two gamma functions (a peak and an undershoot). The temporal derivative can capture differences in the latency of the peak response; the dispersion derivative can capture differences in the duration of the peak response.
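A double-gamma canonical HRF of this kind can be sketched as below. The shape parameters (peak around 6, undershoot around 16, undershoot scaled by 1/6) are the commonly quoted defaults, assumed here for illustration only:

```python
import numpy as np
from math import gamma

def gamma_pdf(t, shape, scale=1.0):
    """Gamma probability density, zero for t <= 0."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    pos = t > 0
    out[pos] = (t[pos] ** (shape - 1) * np.exp(-t[pos] / scale)
                / (gamma(shape) * scale ** shape))
    return out

def canonical_hrf(t):
    # Positive peak minus a smaller, later undershoot
    return gamma_pdf(t, 6) - gamma_pdf(t, 16) / 6.0

t = np.arange(0, 32, 0.1)  # seconds after the impulse
hrf = canonical_hrf(t)
print(t[np.argmax(hrf)])   # peak latency: roughly 5 s, as on the slide
```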

Design matrix: three regressors are used to model each condition (e.g. the Left, Right, and Mean columns of the example design matrix). The three basis functions are: 1. the canonical HRF; 2. its derivative with respect to time; 3. its derivative with respect to dispersion.

Comparison of the fitted response: these plots show the haemodynamic response at a single voxel. The left plot shows the HRF as estimated using the simple model; the lack of fit is corrected on the right using a more flexible model with basis functions.

Summary: basis functions identify position in a space. They are used to model the HRF of the BOLD response to an impulse stimulation in fMRI. SPM allows you to choose from 4 different basis function sets.

Multiple Regression Analysis & Correlated Regressors — Hanneke den Ouden, Methods for Dummies 2007, 12/12/2007

Overview: regression analysis in general; multiple regression; collinearity / correlated regressors; orthogonalisation of regressors in SPM.

Regression analysis: regression analysis examines the relation of a dependent variable Y to a specified independent variable X: Y = aX + b. If the model fits the data well, R² is high (it reflects the proportion of variance in Y explained by the regressor X) and the corresponding p value will be low.
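This fit and the R² statistic can be sketched with simulated data (the slope, intercept, and noise level below are made up for illustration):

```python
import numpy as np

# Simulated data: Y depends linearly on X plus noise
rng = np.random.default_rng(0)
X = np.arange(20, dtype=float)
Y = 2.0 * X + 1.0 + rng.normal(0, 1.0, size=X.size)

a, b = np.polyfit(X, Y, 1)            # least-squares slope and intercept

# R^2: proportion of variance in Y explained by the regressor X
Y_hat = a * X + b
ss_res = np.sum((Y - Y_hat) ** 2)     # residual sum of squares
ss_tot = np.sum((Y - Y.mean()) ** 2)  # total sum of squares
r2 = 1 - ss_res / ss_tot
print(round(r2, 3))                   # close to 1: the model fits well
```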

Multiple regression analysis: multiple regression characterises the relationship between several independent variables (or regressors), X1, X2, X3, etc., and a single dependent variable, Y: Y = β1X1 + β2X2 + … + βLXL + ε. The X variables are combined linearly, and each has its own regression coefficient β (weight). The βs reflect the independent contribution of each regressor, X, to the value of the dependent variable, Y, i.e. the proportion of the variance in Y accounted for by each regressor after all other regressors are accounted for.

Multicollinearity: multiple regression results are sometimes difficult to interpret. The overall p value of a fitted model may be very low (i.e. the model fits the data well), yet the individual p values for the regressors are high (i.e. none of the X variables appears to have a significant impact on predicting Y). How is this possible? It is caused when two (or more) regressors are highly correlated — a problem known as multicollinearity.

Multicollinearity: are correlated regressors a problem? No, when you want to predict Y from X1 and X2, because R² and p will be correct. Yes, when you want to assess the impact of individual regressors, because the individual p values can be misleading: a p value can be high even though the variable is important. In practice this will nearly always be the case.
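The distinction above can be demonstrated with simulated data (all values below are assumptions for illustration): when two regressors are nearly collinear, the overall fit stays good but the individual betas become unstable.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
X1 = rng.normal(size=n)
X2 = X1 + rng.normal(0, 0.05, size=n)     # nearly collinear with X1
Y = X1 + X2 + rng.normal(0, 0.5, size=n)  # true betas are (1, 1)

X = np.column_stack([X1, X2])
betas, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Prediction is fine: the overall fit is good ...
Y_hat = X @ betas
r2 = 1 - np.sum((Y - Y_hat) ** 2) / np.sum((Y - Y.mean()) ** 2)
print(r2 > 0.85)              # True

# ... but the individual betas are poorly determined; only their
# sum (the part the data actually constrains) is estimated well
print(betas)                  # individual estimates are unstable
print(betas.sum())            # close to the true sum of 2
```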

General Linear Model & Correlated Regressors

General Linear Model and fMRI: Y = X·β + ε. Observed data: Y is the BOLD signal at various time points at a single voxel. Design matrix: X contains the components that explain the observed data Y, e.g. the different stimuli and movement regressors. Parameters (or betas): β defines the contribution of each component of the design matrix to the value of Y. Error (or residuals): ε is any variance in Y that cannot be explained by the model X·β.
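The GLM estimation step can be sketched for a single simulated voxel (the box-car stimulus, nuisance regressor, and beta values below are toy assumptions): the least-squares estimate is β̂ = pinv(X)·Y.

```python
import numpy as np

rng = np.random.default_rng(3)
n_scans = 120

# Toy design matrix: a box-car stimulus regressor, a movement
# nuisance regressor, and a constant (mean) term
stimulus = (np.arange(n_scans) % 20 < 10).astype(float)
movement = rng.normal(size=n_scans)
constant = np.ones(n_scans)
X = np.column_stack([stimulus, movement, constant])

# Simulated voxel time series Y = X @ beta + error
true_beta = np.array([2.0, 0.3, 10.0])
Y = X @ true_beta + rng.normal(0, 0.5, size=n_scans)

# Ordinary least-squares parameter estimates
beta_hat = np.linalg.pinv(X) @ Y
residuals = Y - X @ beta_hat          # the unexplained variance (epsilon)
print(beta_hat.round(1))              # close to the true betas
```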

Collinearity example. Experiment: which areas of the brain are active in reward processing? Subjects press a button to get a reward when they spot a red dot amongst green dots. Model to be fitted: Y = β1X1 + β2X2 + ε, where Y = BOLD response, X1 = button press (movement), X2 = response to reward.

Collinearity example: which areas of the brain are active in reward processing? The regressors are linearly dependent (correlated), so variance attributable to an individual regressor may be confounded with the other regressor(s). As a result we don’t know which part of the BOLD response is explained by the movement and which by the response to getting a reward. This may lead to misinterpretation of activations in certain brain areas (is the primary motor cortex involved in reward processing?).  We can’t answer the question.

How to deal with collinearity. Avoid it: design the experiment so that the independent variables are uncorrelated; use common sense. Use the toolbox “Design Magic” — multicollinearity assessment for fMRI in SPM (URL: http://www.matthijs-vink.com/tools.html). It allows you to assess the multicollinearity in your fMRI design by calculating the amount of factor variance that is also accounted for by the other factors in the design (expressed in R²). It also allows you to reduce correlations between regressors through the use of high-pass filters.

How to deal with collinearity II. Orthogonalise the correlated regressor variables using factor analysis (like PCA): this produces linearly independent regressors and corresponding factor scores, which can subsequently be used instead of the original correlated regressor values. However, the meaning of these factors is rather unclear, so SPM does not do this. Instead, SPM does something called serial orthogonalisation (note that this is only within each condition, i.e. for each condition and its associated parametric modulators, if there are any).

Serial Orthogonalisation: when we have only one regressor, things are simple. Y = β1X1, with β1 = 1.5.

Serial Orthogonalisation: when we have two correlated regressors, things become difficult. Y = β1X1 + β2X2, with β1 = 1 and β2 = 1. The value of β1 is now smaller, so X1 now explains less of the variance, as X2 explains some of the variance that X1 used to explain.

Serial Orthogonalisation: we now orthogonalise X2 with respect to X1, and call this X2*. Y = β1X1 + β2*X2*, with β1 = 1.5 and β2* = 1. β1 now again has the original value it had when X2 was not included; β2* has the same value as β2; but note that X2* is a different regressor from X2!
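This orthogonalisation step can be sketched with simulated data (the regressors and noise levels below are assumptions): removing from X2 the part that X1 explains (a Gram-Schmidt step) leaves β1 at exactly the value it has when X1 is fitted alone.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
X1 = rng.normal(size=n)
X2 = 0.7 * X1 + rng.normal(0, 0.5, size=n)   # correlated with X1
Y = X1 + X2 + rng.normal(0, 0.1, size=n)

# beta1 when X1 is the only regressor
b1_alone = (X1 @ Y) / (X1 @ X1)

# Orthogonalise X2 with respect to X1: subtract the projection onto X1
X2_star = X2 - (X1 @ X2) / (X1 @ X1) * X1
assert abs(X1 @ X2_star) < 1e-9              # now orthogonal to X1

# Fit the model with X1 and the orthogonalised regressor X2*
X = np.column_stack([X1, X2_star])
betas, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(np.isclose(betas[0], b1_alone))        # True: beta1 is unchanged
```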

Serial Orthogonalisation in SPM: regressors are orthogonalised from left to right in the design matrix, so the order in which you put parametric modulators is important! Put the ‘most important’ modulators first (i.e. the ones whose meaning you don’t want to change). If you add an orthogonalised regressor, the β values of the preceding regressors do not change: the regressor you orthogonalise to (X1) does not change; the regressor you are orthogonalising (X2) does change. Plot the orthogonalised regressors to see what it is you are actually estimating.

Conclusions: correlated regressors can be a big problem when analysing and interpreting your data. Try to design your experiment so that you avoid correlated regressors, and estimate how much your regressors are correlated so you know what you’re getting yourself into. If you cannot avoid them: think about the order of the regressors in your design matrix, and look at what the regressors look like after orthogonalisation.

Sources: Will Penny & Klaas Stephan; Rik Henson’s slides: www.mrc-cbu.cam.ac.uk/Imaging/Common/rikSPM-GLM.ppt; previous years’ presenters’ slides.