
1 Classical (frequentist) inference
Methods & models for fMRI data analysis in neuroeconomics, 14 October 2009
Klaas Enno Stephan
Laboratory for Social and Neural System Research, Institute for Empirical Research in Economics, University of Zurich
Functional Imaging Laboratory (FIL), Wellcome Trust Centre for Neuroimaging, University College London
With many thanks for slides & images to: the FIL Methods group

2 Overview of SPM
Image time-series → realignment → smoothing (kernel) → normalisation (template) → general linear model (design matrix → parameter estimates) → statistical parametric map (SPM) → statistical inference (Gaussian field theory, p < 0.05)

3 Voxel-wise time series analysis
The BOLD signal of a single voxel over time (single voxel time series) → model specification → parameter estimation → hypothesis → statistic → SPM

4 Overview
- A recap of model specification and parameter estimation
- Hypothesis testing
- Contrasts and estimability
- t-tests
- F-tests
- Design orthogonality
- Design efficiency

5 Mass-univariate analysis: voxel-wise GLM
y = Xβ + e
where y is the N×1 data vector, X the N×p design matrix, β the p×1 parameter vector and e the error (N: number of scans; p: number of regressors). The model is specified by (1) the design matrix X and (2) the assumptions about e. The design matrix embodies all available knowledge about experimentally controlled factors and potential confounds.

6 Parameter estimation
y = Xβ + e
Objective: estimate the parameters β so as to minimise the sum of squared errors, eᵀe = Σₜ eₜ². Ordinary least squares (OLS) estimation assumes i.i.d. errors.

7 OLS parameter estimation
The Ordinary Least Squares (OLS) estimators are β̂ = (XᵀX)⁻¹Xᵀy. These estimators minimise eᵀe; they are found by solving either Xᵀe = 0 or the normal equations XᵀXβ̂ = Xᵀy. Under i.i.d. assumptions, the OLS estimates correspond to the ML estimates. NB: the precision of our estimates depends on the design matrix!
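A minimal numpy sketch of OLS estimation for a single voxel, assuming simulated data and a toy two-column design matrix (illustration only, not SPM code):

```python
import numpy as np

# Simulated single-voxel example (illustration only, not SPM code).
rng = np.random.default_rng(0)
N, p = 100, 2                                   # N scans, p regressors
X = np.column_stack([rng.standard_normal(N),    # one experimental regressor
                     np.ones(N)])               # constant term
y = X @ np.array([2.0, 5.0]) + rng.standard_normal(N)   # i.i.d. noise

# OLS estimate via the normal equations: X'X beta = X'y
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Residuals and unbiased noise-variance estimate
e = y - X @ beta_hat
sigma2_hat = e @ e / (N - np.linalg.matrix_rank(X))

# Precision depends on the design: cov(beta_hat) = sigma^2 (X'X)^{-1}
cov_beta = sigma2_hat * np.linalg.inv(X.T @ X)
print(beta_hat, np.sqrt(np.diag(cov_beta)))
```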

8 Maximum likelihood (ML) estimation
The probability density function p(y|β) treats β as fixed; the likelihood function treats y as fixed; the ML estimator maximises the likelihood. For cov(e) = σ²I, the ML estimator is equivalent to the OLS estimator: β̂ = (XᵀX)⁻¹Xᵀy. For cov(e) = σ²V, the ML estimator is equivalent to a weighted least squares (WLS) estimate with W = V^(−1/2): β̂ = (XᵀWᵀWX)⁻¹XᵀWᵀWy = (XᵀV⁻¹X)⁻¹XᵀV⁻¹y.
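A sketch of the whitening idea behind this equivalence: for serially correlated noise with cov(e) = σ²V, pre-multiplying by W = V^(−1/2) yields a model with i.i.d. errors, so OLS on the whitened data gives the ML/WLS estimate. The AR(1) correlation matrix below is assumed known for illustration (SPM estimates it via ReML):

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(1)
N = 100
X = np.column_stack([rng.standard_normal(N), np.ones(N)])

# AR(1)-style correlation matrix V (assumed known here; SPM uses ReML)
rho = 0.4
V = rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))

# Draw serially correlated noise with cov = V
y = X @ np.array([2.0, 5.0]) + np.linalg.cholesky(V) @ rng.standard_normal(N)

# Whitening: W = V^{-1/2}, so Wy = WX beta + We has i.i.d. errors
W = np.real(np.linalg.inv(sqrtm(V)))
Xw, yw = W @ X, W @ y

# OLS on the whitened model = WLS/ML estimate
beta_ml = np.linalg.solve(Xw.T @ Xw, Xw.T @ yw)
print(beta_ml)
```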

9 SPM: t-statistic based on ML estimates
For a contrast vector such as cᵀ = [1 0 0 0 0 0 0 0 0 0 0], SPM computes a t-statistic from the ML (whitened) parameter estimates, with the error covariance components of V estimated by ReML:
t = cᵀβ̂ / √(σ̂² cᵀ(XᵀV⁻¹X)⁻¹c)

10 Statistic A statistic is the result of applying a function to a sample (set of data). More formally, statistical theory defines a statistic as a function of a sample where the function itself is independent of the sample's distribution. The term is used both for the function and for the value of the function on a given sample. A statistic is distinct from an unknown statistical parameter, which is a population property and can only be estimated approximately from a sample. A statistic used to estimate a parameter is called an estimator. For example, the sample mean is a statistic and an estimator for the population mean, which is a parameter.

11 Hypothesis testing
The “null hypothesis” H0 = “there is no effect”, i.e. cᵀβ = 0. This is what we want to disprove. The “alternative hypothesis” H1 represents the outcome of interest. To test a hypothesis, we construct a “test statistic” T. The test statistic summarises the evidence about H0: typically, it is small in magnitude when H0 is true and large when H0 is false. We therefore need to know the distribution of T under the null hypothesis (the null distribution of T).

12 Hypothesis testing
We observe a test statistic t, a realisation of T. A p-value summarises the evidence against H0: it is the probability of observing t, or a more extreme value, under the null hypothesis, p = P(T ≥ t | H0). Type I error α: the acceptable false positive rate. The threshold u_α controls the false positive rate, α = P(T > u_α | H0). Conclusion about the hypothesis: we reject H0 in favour of H1 if t > u_α.
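A small scipy sketch of these quantities for a one-sided t-test; the degrees of freedom and observed statistic are made-up illustrative values:

```python
from scipy import stats

df = 98         # illustrative degrees of freedom, N - rank(X)
t_obs = 3.1     # illustrative realisation of the test statistic T

# p-value: probability of t_obs or a more extreme value under H0
p_value = stats.t.sf(t_obs, df)

# Threshold u_alpha that controls the false positive rate at alpha
alpha = 0.05
u_alpha = stats.t.isf(alpha, df)

print(p_value, u_alpha, t_obs > u_alpha)   # reject H0 if t_obs > u_alpha
```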

13 Types of error

Test result        | H0 true (actual)             | H0 false (actual)
Reject H0          | False positive (Type I, α)   | True positive
Fail to reject H0  | True negative                | False negative (Type II, β)

14 One cannot accept the null hypothesis (one can just fail to reject it)
Absence of evidence is not evidence of absence! If we do not reject H0, then all we can say is that there is not enough evidence in the data to reject H0. This does not mean that we can accept H0. What does this mean for neuroimaging results based on classical statistics? A failure to find an “activation” in a particular area does not mean we can conclude that this area is not involved in the process of interest.

15 Contrasts
We are usually not interested in the whole β vector. A contrast cᵀβ selects a specific effect of interest: a contrast vector c is a vector of length p, and cᵀβ is a linear combination of regression coefficients. Examples:
cᵀ = [1 0 0 0 0 …] gives cᵀβ = 1×β1 + 0×β2 + 0×β3 + 0×β4 + 0×β5 + …
cᵀ = [0 -1 1 0 0 …] gives cᵀβ = 0×β1 − 1×β2 + 1×β3 + 0×β4 + 0×β5 + …
Under i.i.d. assumptions: cᵀβ̂ ~ N(cᵀβ, σ² cᵀ(XᵀX)⁻¹c). NB: the precision of our estimates depends on the design matrix and the chosen contrast!

16 Estimability of a contrast
If X is not of full rank, then different parameter vectors can give identical predictions; the parameters are then ‘non-unique’, ‘non-identifiable’ or ‘non-estimable’. For such models XᵀX is not invertible, so we must resort to generalised inverses (SPM uses the Moore-Penrose pseudo-inverse), which give the parameter vector with the smallest norm of all possible solutions. Even if parameters are non-estimable, certain contrasts may well be!
Example: one-way ANOVA (unpaired two-sample t-test), with one column per group plus a grand-mean column; the design matrix has rows of the form [1 0 1] (group 1) and [0 1 1] (group 2), so rank(X) = 2. The contrasts [1 0 0], [0 1 0] and [0 0 1] are not estimable; [1 0 1], [0 1 1], [1 -1 0] and [0.5 0.5 1] are estimable. [Figure: parameter images for factor levels 1, 2 and the mean; gray indicates parameters that are not uniquely specified.]
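The estimability criterion can be checked numerically: cᵀβ is estimable if and only if cᵀ lies in the row space of X, i.e. cᵀX⁺X = cᵀ with X⁺ the Moore-Penrose pseudo-inverse. A sketch using the slide's rank-deficient ANOVA design (the helper `is_estimable` is my own, not an SPM function):

```python
import numpy as np

# Rank-deficient one-way ANOVA design from the slide:
# one column per group plus a grand-mean column, rank(X) = 2.
X = np.array([[1, 0, 1],
              [1, 0, 1],
              [0, 1, 1],
              [0, 1, 1]], dtype=float)

def is_estimable(c, X):
    """c'beta is estimable iff c' lies in the row space of X,
    i.e. c' X+ X == c' (X+ = Moore-Penrose pseudo-inverse)."""
    c = np.asarray(c, dtype=float)
    return np.allclose(c @ np.linalg.pinv(X) @ X, c)

for c in ([1, 0, 0], [0, 1, 0], [0, 0, 1],                  # not estimable
          [1, 0, 1], [0, 1, 1], [1, -1, 0], [0.5, 0.5, 1]): # estimable
    print(c, is_estimable(c, X))
```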

17 Student's t-distribution
First described by William Sealy Gosset, a statistician at the Guinness brewery in Dublin. The t-statistic is a signal-to-noise measure: t = effect / its standard deviation. For small samples the t-distribution replaces the normal distribution, which it approaches as the degrees of freedom increase. t-contrasts are simply linear combinations of the betas, so the t-statistic does not depend on the scaling of the regressors or on the scaling of the contrast. Unilateral (one-sided) test: H0: cᵀβ = 0 vs. H1: cᵀβ > 0. [Figure: probability density function of Student's t-distribution for ν = 1, 2, 5, 10, ∞.]

18 t-contrasts – SPM{t}
Question: is the box-car amplitude greater than zero, i.e. β1 = cᵀβ > 0, with cᵀ = [1 0 0 0 0 0 0 0] selecting β1 from β1, β2, β3, β4, β5, …?
Null hypothesis: H0: cᵀβ = 0.
Test statistic: t = (contrast of estimated parameters) / (variance estimate) = cᵀβ̂ / √(σ̂² cᵀ(XᵀX)⁻¹c), which follows a t-distribution under H0.

19 t-contrasts in SPM
For a given contrast c, SPM combines the beta_???? images (parameter estimates) with the ResMS image (residual mean squares, the estimated σ̂²) to produce a con_???? image (the contrast of parameter estimates, cᵀβ̂) and an spmT_???? image (the SPM{t}).
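A per-voxel sketch of the quantities these images hold, using a simulated box-car design (illustration only; variable names like `res_ms` and `con` merely mirror the image names, this is not SPM code):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
N = 100
# Box-car regressor (10-scan on/off blocks) plus a constant
X = np.column_stack([np.kron([1, 0] * 5, np.ones(10)), np.ones(N)])
y = X @ np.array([1.5, 100.0]) + rng.standard_normal(N)

beta_hat = np.linalg.pinv(X) @ y                        # beta images
df = N - np.linalg.matrix_rank(X)
res_ms = np.sum((y - X @ beta_hat) ** 2) / df           # ResMS image

c = np.array([1.0, 0.0])                                # box-car amplitude > 0 ?
con = c @ beta_hat                                      # con image
t = con / np.sqrt(res_ms * c @ np.linalg.pinv(X.T @ X) @ c)   # spmT image
print(t, stats.t.sf(t, df))
```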

20 t-contrast: a simple example
Passive word listening versus rest. Q: activation during listening? cᵀ = [1 0]. Null hypothesis: H0: cᵀβ = 0. [Figure: design matrix.]
SPM results, height threshold T = 3.2057 {p < 0.001}, voxel level:

T      Z (≡)  p (unc)  x, y, z (mm)
13.94  Inf    0.000    -63 -27  15
12.04  Inf    0.000    -48 -33  12
11.82  Inf    0.000    -66 -21   6
13.72  Inf    0.000     57 -21  12
12.29  Inf    0.000     63 -12  -3
 9.89  7.83   0.000     57 -39   6
 7.39  6.36   0.000     36 -30 -15
 6.84  5.99   0.000     51   0  48
 6.36  5.65   0.000    -63 -54  -3
 6.19  5.53   0.000    -30 -33 -18
 5.96  5.36   0.000     36 -27   9
 5.84  5.27   0.000    -45  42   9
 5.44  4.97   0.000     48  27  24
 5.32  4.87   0.000     36 -27  42

21 F-test: the extra-sum-of-squares principle
Model comparison: full vs. reduced model. Null hypothesis H0: the true model is X0 (the reduced model). Do we need the full model [X0 X1], with residual sum of squares RSS, or does the reduced model X0, with residual sum of squares RSS0, suffice?
F-statistic: the ratio of the variance unexplained by X0 but explained by the full model to the total unexplained variance under the full model:
F = [(RSS0 − RSS)/ν1] / [RSS/ν2], with ν1 = rank(X) − rank(X0) and ν2 = N − rank(X).
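A sketch of the extra-sum-of-squares computation on simulated data (the design split and effect sizes are made up for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
N = 100
X0 = np.ones((N, 1))                       # reduced model (confounds only)
X1 = rng.standard_normal((N, 2))           # regressors of interest
X = np.column_stack([X0, X1])              # full model
y = X @ np.array([5.0, 1.0, 0.5]) + rng.standard_normal(N)

def rss(X, y):
    """Residual sum of squares after an OLS fit."""
    r = y - X @ (np.linalg.pinv(X) @ y)
    return r @ r

RSS0, RSS = rss(X0, y), rss(X, y)
nu1 = np.linalg.matrix_rank(X) - np.linalg.matrix_rank(X0)
nu2 = N - np.linalg.matrix_rank(X)
F = ((RSS0 - RSS) / nu1) / (RSS / nu2)
print(F, stats.f.sf(F, nu1, nu2))          # p-value under H0: true model is X0
```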

22 F-test: multidimensional contrasts – SPM{F}
An F-test can test multiple linear hypotheses at once. Here H0: the true model is X0, i.e. H0: β3 = β4 = … = β9 = 0 (full model X0 + X1(β3–9), or reduced model X0?). This is tested as H0: cᵀβ = 0 with a multidimensional contrast whose rows each select one of the parameters under test:
cᵀ = [0 0 1 0 0 … 0
      0 0 0 1 0 … 0
      ⋮
      0 0 0 0 0 … 1]
The resulting map is an SPM{F6,322}.

23 F-contrast in SPM
For a given F-contrast, SPM combines the beta_???? images with the ResMS image to produce ess_???? images (the extra sum of squares, RSS0 − RSS) and spmF_???? images (the SPM{F}).

24 F-test example: movement related effects To assess movement-related activation: There is a lot of residual movement-related artifact in the data (despite spatial realignment), which tends to be concentrated near the boundaries of tissue types. By including the realignment parameters in our design matrix, we can “regress out” linear components of subject movement, reducing the residual error, and hence improve our statistics for the effects of interest.

25 Differential F-contrasts
Think of it as constructing 3 regressors from the 3 differences and complementing this new design matrix such that the data can be fitted in exactly the same way (same error, same fitted data).

26 F-test: a few remarks
- F-tests can be viewed as testing for the additional variance explained by a larger model with respect to a simpler (nested) model: model comparison.
- An F-test is a weighted sum of squares of one or several combinations of the regression coefficients β. In practice, the partitioning of X into [X0 X1] is done with multidimensional contrasts.
- F-tests are not directional: when testing a uni-dimensional contrast with an F-test, for example β1 − β2, the result is the same as for testing β2 − β1.
- Hypotheses: null hypothesis H0: β1 = β2 = … = βp = 0; alternative hypothesis H1: at least one βk ≠ 0.

27 Example: a suboptimal model
[Figure: true signal (blue, peak at 3 s) and observed signal; model regressor (green, peak at 6 s); fit with β̂1 = 0.2, mean = 0.11; residuals, which still contain some signal.] The test for the green regressor is not significant.

28 Example: a suboptimal model
Y = Xβ + e, with β̂1 = 0.22, β̂2 = 0.11, residual variance = 0.3.
p(Y | β1 = 0): p-value = 0.1 (t-test)
p(Y | β1 = 0): p-value = 0.2 (F-test)

29 A better model
The red regressor is the temporal derivative of the green regressor. [Figure: true signal and observed signal; model (green and red) against the true signal (blue); total fit (blue) and partial fits (green & red); adjusted and fitted signal; residuals with less variance and structure.]
- t-test of the green regressor: almost significant
- t-test of the red regressor: very significant
- F-test: very significant

30 A better model
Y = Xβ + e, with β̂1 = 0.22, β̂2 = 2.15, β̂3 = 0.11, residual variance = 0.2.
p(Y | β1 = 0): p-value = 0.07 (t-test)
p(Y | β1 = 0, β2 = 0): p-value = 0.000001 (F-test)

31 Correlation among regressors
Correlated regressors = explained variance is shared between the regressors. [Figure: y and the regressors x1, x2, and x2* (x2 orthogonalised with respect to x1).] When x2 is orthogonalised with respect to x1, only the parameter estimate for x1 changes, not that for x2!

32 Design orthogonality
For each pair of columns of the design matrix, the orthogonality matrix depicts the magnitude of the cosine of the angle between them, with the range 0 to 1 mapped from white to black. The cosine of the angle between two vectors a and b is cos θ = aᵀb / (‖a‖ ‖b‖). If both vectors have zero mean, then the cosine of the angle between them is the same as the correlation between the two variates.
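A short numpy sketch of this orthogonality measure for a toy design matrix (assumed data; SPM's own display adds the gray-scale rendering):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((100, 3))
X[:, 1] += 0.8 * X[:, 0]                   # make two columns correlated

# |cos(angle)| between every pair of design-matrix columns
norms = np.linalg.norm(X, axis=0)
cos_angles = np.abs(X.T @ X / np.outer(norms, norms))
print(np.round(cos_angles, 2))             # 1 on the diagonal; 0 = orthogonal pair
```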

33 Correlated regressors
[Figure: true signal; model (green and red regressors); fit (blue: total fit); residual.]

34 Correlated regressors
Y = Xβ + e, with β̂1 = 0.79, β̂2 = 0.85, β̂3 = 0.06, residual variance = 0.3.
p(Y | β1 = 0): p-value = 0.08 (t-test)
p(Y | β2 = 0): p-value = 0.07 (t-test)
p(Y | β1 = 0, β2 = 0): p-value = 0.002 (F-test)

35 After orthogonalisation
The red regressor has been orthogonalised with respect to the green one, i.e. everything that correlates with the green regressor has been removed from it. [Figure: true signal; model (green and red); fit (does not change); residuals (do not change).]

36 After orthogonalisation
Y = Xβ + e, with β̂1 = 1.47 (was 0.79: does change), β̂2 = 0.85 (does not change), β̂3 = 0.06 (does not change), residual variance = 0.3.
p(Y | β1 = 0): p-value = 0.0003 (t-test)
p(Y | β2 = 0): p-value = 0.07 (t-test)
p(Y | β1 = 0, β2 = 0): p-value = 0.002 (F-test)
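A small numpy demonstration of this behaviour on simulated correlated regressors (the data and correlation strength are made up; only the qualitative pattern matters):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 200
x1 = rng.standard_normal(N)
x2 = 0.7 * x1 + rng.standard_normal(N)     # correlated with x1
y = 1.0 * x1 + 0.8 * x2 + rng.standard_normal(N)

def fit(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

print(fit(np.column_stack([x1, x2]), y))           # original estimates

# Orthogonalise x2 w.r.t. x1: remove everything that correlates with x1
x2_orth = x2 - (x1 @ x2 / (x1 @ x1)) * x1
print(fit(np.column_stack([x1, x2_orth]), y))      # beta_1 changes, beta_2 does not
```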

37 Design efficiency
The aim is to minimise the standard error of a t-contrast (i.e. the denominator of the t-statistic). This is equivalent to maximising the efficiency ε(σ², c, X) = (σ² cᵀ(XᵀX)⁻¹c)⁻¹, which depends on the noise variance σ² and the design variance cᵀ(XᵀX)⁻¹c. If we assume that the noise variance is independent of the specific design, we can use ε(c, X) = (cᵀ(XᵀX)⁻¹c)⁻¹. This is a relative measure: all we can say is that one design is more efficient than another (for a given contrast). NB: efficiency depends on the design matrix and the chosen contrast!

38 Design efficiency
For mean-centred regressors, XᵀX is proportional to the covariance matrix of the regressors in the design matrix. Efficiency decreases with increasing covariance between regressors, but note that it differs across contrasts:
cᵀ = [1 0] → ε = 0.19
cᵀ = [1 1] → ε = 0.05
cᵀ = [1 -1] → ε = 0.95
[Figure: blue dots show noise with the covariance structure of XᵀX.]
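A sketch of the efficiency computation for these three contrasts, using a toy pair of negatively correlated regressors so the qualitative ordering matches the slide; the numerical values will differ from those above, which come from a different design:

```python
import numpy as np

rng = np.random.default_rng(6)
N = 200
x1 = rng.standard_normal(N)
x2 = -0.9 * x1 + 0.4 * rng.standard_normal(N)   # negatively correlated pair
X = np.column_stack([x1, x2])

def efficiency(c, X):
    """Relative efficiency with the noise variance dropped."""
    c = np.asarray(c, dtype=float)
    return 1.0 / (c @ np.linalg.inv(X.T @ X) @ c)

for c in ([1, 0], [1, 1], [1, -1]):
    print(c, efficiency(c, X))
# The anti-correlation makes the sum [1 1] poorly estimated (low efficiency)
# and the difference [1 -1] well estimated (high efficiency).
```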

39 Example: working memory
[Figure: stimulus and response time courses for designs A, B and C.]
A: Response follows each stimulus with a (short) fixed delay → correlation = -.65, efficiency ([1 0]) = 29
B: Jittering the delay between stimuli and responses → correlation = +.33, efficiency ([1 0]) = 40
C: Requiring a response only for half of all trials (randomly chosen) → correlation = -.24, efficiency ([1 0]) = 47

40 Bibliography
Friston KJ et al. (2007) Statistical Parametric Mapping: The Analysis of Functional Brain Images. Elsevier.
Christensen R (1996) Plane Answers to Complex Questions: The Theory of Linear Models. Springer.
Friston KJ et al. (1995) Statistical parametric maps in functional imaging: a general linear approach. Human Brain Mapping 2:189-210.
Mechelli A et al. (2003) Estimating efficiency a priori: a comparison of blocked and randomized designs. NeuroImage 18:798-805.

41 Thank you

