Spatial preprocessing of fMRI data


1 Spatial preprocessing of fMRI data
Klaas Enno Stephan Laboratory for Social and Neural Systems Research Institute for Empirical Research in Economics University of Zurich Functional Imaging Laboratory (FIL) Wellcome Trust Centre for Neuroimaging University College London With many thanks for helpful slides to: John Ashburner Meike Grol Ged Ridgway Methods & models for fMRI data analysis 30 September 2009

2 Overview of SPM
[Pipeline diagram: image time-series → realignment → smoothing (kernel) → general linear model (design matrix) → statistical parametric map (SPM), thresholded at p < 0.05; normalisation to a template; parameter estimates; statistical inference via Gaussian field theory]

3 Functional MRI (fMRI)
Uses echo planar imaging (EPI) for fast acquisition of T2*-weighted images.
Spatial resolution: 3 mm (standard 1.5 T scanner) down to < 200 μm (high-field systems)
Sampling speed: 1 slice: ms
Requires spatial pre-processing and statistical analysis.
[Images: EPI (T2*) with signal dropout; T1]

4 Terminology of fMRI
Hierarchy: subjects → sessions → runs. A single run consists of volumes; each volume consists of slices; each slice consists of voxels.
TR (repetition time): the time required to scan one volume.

5 Terminology of fMRI
Scan volume:
- Field of view (FOV), e.g. 192 mm
- Axial slices, slice thickness e.g. 3 mm
- Matrix size, e.g. 64 x 64
- In-plane resolution: 192 mm / 64 = 3 mm
- Voxel size (volumetric pixel): e.g. 3 mm x 3 mm x 3 mm
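The voxel-geometry arithmetic on this slide can be checked directly (a toy Python snippet; the variable names are illustrative, not SPM fields):

```python
# Checking the slide's voxel-geometry arithmetic. Variable names are
# illustrative, not SPM fields.
fov_mm = 192.0          # field of view
matrix_size = 64        # 64 x 64 in-plane matrix
slice_thickness = 3.0   # mm

in_plane_res = fov_mm / matrix_size                  # 192 / 64 = 3 mm
voxel_volume = in_plane_res ** 2 * slice_thickness   # 3 x 3 x 3 = 27 mm^3
```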

6 Standard space The Talairach Atlas The MNI/ICBM AVG152 Template

7 World space & voxel space
Voxel index → world coordinates (mm):
(44, 66, 36) → (4, 4, -2)
(45, 66, 35) → (2, 4, -4)
(45, 65, 35) → (2, 2, -4)

8 Changing coordinate systems
By moving to 4D, one can include translations within a single matrix multiplication. These 4-by-4 "homogeneous matrices" are the currency of voxel-world mappings, affine coregistration and realignment. In SPM5 & SPM8 they are stored in the .hdr files; in SPM2 they are stored in .mat files. To find inverse mappings, or the result of concatenating multiple transformations, we simply follow the rules of matrix algebra.
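A minimal numpy sketch of such a voxel-world mapping (the matrix values here are invented for illustration — 2 mm isotropic voxels and an arbitrary origin — not taken from any real header):

```python
import numpy as np

# A 4x4 homogeneous voxel-to-world matrix: zooms/rotations in the
# upper-left 3x3 block, translation in the last column. Values invented.
M = np.eye(4)
M[:3, :3] = np.diag([2.0, 2.0, 2.0])   # 2 mm isotropic voxels
M[:3, 3] = [-90.0, -126.0, -72.0]      # world position of voxel (0, 0, 0)

# voxel index -> world coordinates (mm): append a 1 and multiply
voxel = np.array([45, 63, 36, 1])
world = M @ voxel

# inverse mappings and concatenations follow ordinary matrix algebra
back = np.linalg.inv(M) @ world
```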

9 Why does fMRI require spatial preprocessing?
Head motion artefacts during scanning → Realignment
Problems of EPI acquisition: distortion and signal dropouts → "Unwarping"
Brains are quite different across subjects → Normalisation ("Warping") and Smoothing

10 Realignment or “motion correction”
Even small head movements can be a major problem:
- increase in residual variance
- data may be lost completely if sudden movements occur during a single volume
- movements may be correlated with the task performed
Therefore:
- always constrain the volunteer's head
- instruct him/her explicitly to remain as still as possible
- do not scan for too long – everyone will move after a while!
Minimising movements is one of the most important factors for ensuring good data quality!

11 Realignment = rigid-body registration
Assumes that all movements are those of a rigid body, i.e. the shape of the brain does not change Two steps: Registration: optimising six parameters that describe a rigid body transformation between the source and a reference image Transformation: re-sampling according to the determined transformation

12 Linear (affine) transformations
Rigid-body transformations are a subset. Parallel lines remain parallel. The operations can be represented by:
x1 = m11 x0 + m12 y0 + m13 z0 + m14
y1 = m21 x0 + m22 y0 + m23 z0 + m24
z1 = m31 x0 + m32 y0 + m33 z0 + m34
or, equivalently, as a single matrix multiplication.

13 2D affine transforms
Translations by tx and ty:
x1 = 1 x0 + 0 y0 + tx
y1 = 0 x0 + 1 y0 + ty
Rotation around the origin by θ radians:
x1 = cos(θ) x0 + sin(θ) y0 + 0
y1 = -sin(θ) x0 + cos(θ) y0 + 0
Zooms by sx and sy:
x1 = sx x0 + 0 y0 + 0
y1 = 0 x0 + sy y0 + 0
Shear by h:
x1 = 1 x0 + h y0 + 0
y1 = 0 x0 + 1 y0 + 0
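These four transform families can be written as 3-by-3 homogeneous matrices and composed by matrix multiplication (a small numpy sketch using the slide's sign convention for rotation; the helper names are illustrative):

```python
import numpy as np

# 3x3 homogeneous versions of the slide's 2D transforms.
def translation(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)

def rotation(theta):
    # slide convention: x1 = cos(t) x0 + sin(t) y0, y1 = -sin(t) x0 + cos(t) y0
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

def zoom(sx, sy):
    return np.diag([sx, sy, 1.0])

def shear(h):
    return np.array([[1, h, 0], [0, 1, 0], [0, 0, 1]], dtype=float)

# compose: rotate the point (1, 0) by 90 degrees, then translate by (5, 0)
T = translation(5, 0) @ rotation(np.pi / 2)
p = T @ np.array([1.0, 0.0, 1.0])   # homogeneous point (1, 0)
```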

14 Representing rotations

15 3D rigid-body transformations
A 3D rigid-body transform is defined by:
- 3 translations: in X, Y & Z directions
- 3 rotations: about X, Y & Z axes (pitch about the x axis, roll about the y axis, yaw about the z axis)
Non-commutative: the order of the operations matters.
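Non-commutativity is easy to verify numerically (a numpy sketch; rot_x/rot_z are illustrative helpers following the same sign convention as the 2D rotation on the previous slide):

```python
import numpy as np

# Two elementary 3D rotations; applying them in different orders gives
# different rigid-body transforms.
def rot_x(a):   # pitch about the x axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def rot_z(a):   # yaw about the z axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

a = np.deg2rad(10.0)
pitch_then_yaw = rot_z(a) @ rot_x(a)   # rightmost matrix is applied first
yaw_then_pitch = rot_x(a) @ rot_z(a)
# the two orderings differ, although each is still a valid rotation
```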

16 Realignment
Goal: minimise the squared differences between the source and the reference image. Other methods are available (e.g. mutual information).
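A toy version of this least-squares criterion, assuming integer shifts of a circular 2D signal (purely illustrative; SPM optimises six continuous rigid-body parameters, not a discrete shift):

```python
import numpy as np

# Find the integer shift that minimises the sum of squared differences
# (SSD) between a "source" image and a reference.
def ssd(source, reference):
    return float(np.sum((source - reference) ** 2))

rng = np.random.default_rng(0)
reference = rng.standard_normal((8, 8))
source = np.roll(reference, 1, axis=0)   # reference "moved" by one row

# evaluate the cost for candidate shifts; the minimum marks the shift
# that undoes the motion
costs = {k: ssd(np.roll(source, -k, axis=0), reference) for k in range(4)}
best_shift = min(costs, key=costs.get)
```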

17 A special case ... If a subject remained perfectly still during an fMRI study, would it still be a good idea to perform realignment? When is this issue of practical relevance?

18 Simple interpolation
Nearest neighbour: take the value of the closest voxel.
Linear (2D: bilinear; 3D: trilinear): a weighted average of the neighbouring voxels:
f5 = f1 x2 + f2 x1
f6 = f3 x2 + f4 x1
f7 = f5 y2 + f6 y1
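The three equations above can be wrapped into a small function (a sketch; argument names follow the slide's labels, with x1 and y1 taken as fractional offsets in [0, 1] so that x2 = 1 - x1 and y2 = 1 - y1):

```python
def bilinear(f1, f2, f3, f4, x1, y1):
    """Bilinear interpolation from four corner values, following the
    slide's notation: interpolate along x on each edge, then along y."""
    x2, y2 = 1.0 - x1, 1.0 - y1
    f5 = f1 * x2 + f2 * x1   # along x (top edge)
    f6 = f3 * x2 + f4 * x1   # along x (bottom edge)
    f7 = f5 * y2 + f6 * y1   # then along y
    return f7
```

At the centre of the cell (x1 = y1 = 0.5) this reduces to the plain average of the four corner values.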

19 B-spline interpolation
A continuous function is represented by a linear combination of basis functions 2D B-spline basis functions of degrees 0, 1, 2 and 3 B-splines are piecewise polynomials Nearest neighbour and trilinear interpolation are the same as B-spline interpolation with degrees 0 and 1.
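scipy.ndimage exposes exactly this family: its map_coordinates function resamples an image with B-splines of a chosen degree, where order 0 and order 1 reproduce nearest-neighbour and (bi)linear interpolation (a small sketch on a toy image):

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Resample one non-integer location with B-splines of increasing degree:
# order=0 is nearest neighbour, order=1 is (bi)linear, order=3 is cubic.
img = np.arange(16, dtype=float).reshape(4, 4)   # img[r, c] = 4r + c
pt = np.array([[1.25], [1.75]])                  # (row, column) location

nearest = map_coordinates(img, pt, order=0)      # value at (1, 2)
linear = map_coordinates(img, pt, order=1)       # 4*1.25 + 1.75
cubic = map_coordinates(img, pt, order=3)
```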

20 Why does fMRI require spatial preprocessing?
Head motion artefacts during scanning → Realignment
Problems of EPI acquisition: distortion and signal dropouts → "Unwarping"
Brains are quite different across subjects → Normalisation ("Warping") and Smoothing

21 Residual errors after realignment
Resampling can introduce interpolation errors.
Slices are not acquired simultaneously; rapid movements are not accounted for by the rigid-body model.
Image artefacts may not move according to a rigid-body model:
- image distortion
- image dropout
- Nyquist ghost
Functions of the estimated motion parameters can be included as confound regressors in subsequent statistical analyses.

22 Movement by distortion interactions
Subject disrupts B0 field, rendering it inhomogeneous → distortions in phase-encode direction Subject moves during EPI time series → distortions vary with subject orientation → shape of imaged brain varies Andersson et al. 2001, NeuroImage

23 Movement by distortion interaction

24 Movement by distortion interactions
original deformations | deformations after head rotation | deformations after realignment | mismatch in deformations
Andersson et al. 2001, NeuroImage (Fig. 2): an example deformation field deflects each sampling point, deforming the imaged contour. After a 30° rotation of the object, the field stays fixed with respect to the scanner, so each point on the contour falls on a different part of the field; even after realignment, each point is therefore deformed with a different magnitude and direction than in the unrotated case. According to this model, rotations and translations alike introduce differential deformations, and the resulting mismatch between the two images is larger than that produced by the field alone.

25 Different strategies for correcting movement artefacts
liberal control: realignment only moderate control: realignment + “unwarping” strict control: realignment + inclusion of realignment parameters in statistical model

26 Why does fMRI require spatial preprocessing?
Head motion artefacts during scanning → Realignment
Problems of EPI acquisition: distortion and signal dropouts → "Unwarping"
Brains are quite different across subjects → Normalisation ("Warping") and Smoothing

27 Individual brains differ in size, shape and folding

28 Spatial normalisation: why necessary?
Inter-subject averaging:
- increase sensitivity with more subjects (fixed-effects analysis)
- extrapolate findings to the population as a whole (random / mixed-effects analysis)
Make results from different studies comparable by bringing them into a standard coordinate system, e.g. MNI space.

29 Spatial normalisation: objective
Warp the images such that functionally corresponding regions from different subjects are as close together as possible.
Problems:
- there is not always an exact match between structure and function
- different brains are organised differently
- computational problems (local minima, not enough information in the images, computationally expensive)
Compromise: correct gross differences, then smooth the normalised images.

30 Spatial normalisation: affine step
The first part is a 12-parameter affine transform: 3 translations, 3 rotations, 3 zooms and 3 shears. It fits overall shape and size.
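The composition can be sketched with 4-by-4 homogeneous blocks (rotations omitted for brevity; the helper names and parameter values are arbitrary examples, not SPM's parameterisation):

```python
import numpy as np

# Composing an affine transform from translation, zoom and shear blocks.
def trans(tx, ty, tz):
    M = np.eye(4)
    M[:3, 3] = [tx, ty, tz]
    return M

def zooms(sx, sy, sz):
    return np.diag([sx, sy, sz, 1.0])

def shears(hxy, hxz, hyz):
    M = np.eye(4)
    M[0, 1], M[0, 2], M[1, 2] = hxy, hxz, hyz
    return M

A = trans(2.0, 0.0, -1.0) @ zooms(1.1, 0.9, 1.0) @ shears(0.05, 0.0, 0.0)
# all 12 free parameters live in the top three rows of the 4x4 matrix;
# the bottom row stays (0, 0, 0, 1), so parallel lines remain parallel
```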

31 Spatial normalisation: non-linear step
Deformations consist of a linear combination of smooth basis functions. These basis functions result from a 3D discrete cosine transform (DCT).

32 Spatial normalisation: Bayesian regularisation
Deformations consist of a linear combination of smooth basis functions – a set of low-frequency components from a 3D discrete cosine transform.
Find maximum a posteriori (MAP) estimates of the deformation parameters by simultaneously minimising:
- the squared "difference" between template and source image
- the squared distance between the parameters and their expected values (regularisation)
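A toy 1D illustration of this MAP trade-off, assuming a single shift parameter in place of the DCT coefficients (purely illustrative, not SPM's algorithm):

```python
import numpy as np

# Toy MAP objective:
#   J(p) = ||template - warp(source, p)||^2 + lam * (p - p0)^2
def objective(p, source, template, p0, lam):
    n = source.size
    warped = np.interp(np.arange(n) + p, np.arange(n), source)
    return np.sum((template - warped) ** 2) + lam * (p - p0) ** 2

x = np.arange(20, dtype=float)
template = np.exp(-(x - 10.0) ** 2 / 4.0)
source = np.exp(-(x - 12.0) ** 2 / 4.0)      # template shifted by 2

shifts = np.linspace(0.0, 4.0, 81)
best_ml = shifts[np.argmin([objective(p, source, template, 0.0, 0.0)
                            for p in shifts])]
best_map = shifts[np.argmin([objective(p, source, template, 0.0, 50.0)
                             for p in shifts])]
# best_ml recovers the true shift (~2); the strong prior at 0 pulls
# best_map back toward the prior mean, suppressing large warps
```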

33 Spatial normalisation: overfitting
Without regularisation, non-linear spatial normalisation can introduce unnecessary warps.
Template image
Affine registration (χ² = 472.1)
Non-linear registration without regularisation (χ² = 287.3)
Non-linear registration using regularisation (χ² = 302.7)

34 Segmentation GM and WM segmentations overlaid on original images
Structural image, GM and WM segments, and brain-mask (sum of GM and WM)

35 Unified segmentation with tissue class priors
Goal: for each voxel, compute the probability that it belongs to a particular tissue type, given its intensity.
Likelihood model: intensities are modelled by a mixture of Gaussian distributions representing different tissue classes (e.g. GM, WM, CSF).
Priors are obtained from tissue probability maps (segmented images of 151 subjects).
p(tissue | intensity) ∝ p(intensity | tissue) ∙ p(tissue)
Ashburner & Friston 2005, NeuroImage
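The Bayes rule above can be made concrete for a single voxel (a toy sketch; the Gaussian class parameters and priors are invented for illustration, not SPM's values):

```python
from math import exp, pi, sqrt

# p(tissue | intensity) is proportional to p(intensity | tissue) * p(tissue).
def gauss(x, mu, sd):
    return exp(-(x - mu) ** 2 / (2.0 * sd ** 2)) / (sd * sqrt(2.0 * pi))

classes = {"GM": (60.0, 10.0), "WM": (90.0, 8.0), "CSF": (30.0, 12.0)}
prior = {"GM": 0.4, "WM": 0.4, "CSF": 0.2}   # from a tissue probability map

intensity = 85.0
unnorm = {t: gauss(intensity, mu, sd) * prior[t]
          for t, (mu, sd) in classes.items()}
z = sum(unnorm.values())
posterior = {t: v / z for t, v in unnorm.items()}   # normalised, sums to 1
```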

36 Segmentation & normalisation
Circular relationship between segmentation & normalisation: Knowing which tissue type a voxel belongs to helps normalisation. Knowing where a voxel is (in standard space) helps segmentation. Build a joint generative model: model how voxel intensities result from mixture of tissue type distributions model how tissue types of one brain have to be spatially deformed to match those of another brain Using a priori knowledge about the parameters: adopt Bayesian approach and maximise the posterior probability Ashburner & Friston 2005, NeuroImage

37 Normalisation options in practice
Conventional normalisation:
- either warp the functional scans to the EPI template directly,
- or coregister the structural scan to the functional scans and warp the structural scan to the T1 template; then apply these parameters to the functional scans ("Normalise: Write")
Unified segmentation:
- coregister the structural scan to the functional scans
- unified segmentation provides the normalisation parameters
- apply these parameters to the functional scans ("Normalise: Write")

38 Smoothing
Why smooth?
- increase signal to noise
- inter-subject averaging
- increase validity of Gaussian random field theory
In SPM, smoothing is a convolution with a Gaussian kernel, defined in terms of its FWHM (full width at half maximum). Gaussian convolution is separable.

39 Smoothing Smoothing is done by convolving with a 3D Gaussian which is defined by its full width at half maximum (FWHM). Each voxel after smoothing effectively becomes the result of applying a weighted region of interest. Before convolution Convolved with a circle Convolved with a Gaussian

40 Summary: spatial preprocessing steps
Head motion artefacts during scanning → Realignment
Problems of EPI acquisition: distortion and signal dropouts → "Unwarping"
Brains are quite different across subjects → Normalisation ("Warping") and Smoothing

41 Thank you!

