Computational Neuroanatomy. John Ashburner.


1

2 Computational Neuroanatomy. John Ashburner.
Smoothing. Motion Correction. Between Modality Co-registration. Spatial Normalisation. Segmentation. Morphometry.

3 Overview.
The preprocessing and analysis pipeline: the fMRI time-series undergo motion correction and smoothing (with a kernel), then spatial normalisation to an anatomical reference; the General Linear Model (design matrix) then yields parameter estimates and a Statistical Parametric Map.

4 Smoothing.
Why smooth?
–Potentially increase signal to noise.
–Inter-subject averaging.
–Increase validity of SPM.
In SPM, smoothing is a convolution with a Gaussian kernel.
The kernel is defined in terms of FWHM (full width at half maximum).
Gaussian convolution is separable.
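The FWHM-to-sigma conversion and the separable convolution can be sketched in a few lines of numpy/scipy. This is an illustrative sketch, not SPM's MATLAB implementation; the function name and the voxel sizes below are made up for the example:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_fwhm(volume, fwhm_mm, voxel_size_mm):
    """Smooth a 3-D volume with a Gaussian kernel specified by its FWHM in mm.

    sigma = FWHM / (2 * sqrt(2 * ln 2)); because the Gaussian is separable,
    gaussian_filter performs the 3-D convolution as three 1-D convolutions.
    """
    fwhm_vox = np.asarray(fwhm_mm, dtype=float) / np.asarray(voxel_size_mm, dtype=float)
    sigma = fwhm_vox / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return gaussian_filter(volume, sigma)

# Example: an 8 mm FWHM kernel applied to a random 32^3 volume with 2 mm voxels.
vol = np.random.rand(32, 32, 32)
smoothed = smooth_fwhm(vol, fwhm_mm=[8, 8, 8], voxel_size_mm=[2, 2, 2])
```

Smoothing reduces the voxel-to-voxel variance, which is one way to see the signal-to-noise gain for signals broader than the kernel.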

5 Smoothing.
Smoothing is done by convolving with a 3D Gaussian, defined by its full width at half maximum (FWHM). [Illustration: before convolution; convolved with a circle; convolved with a Gaussian.]
Each voxel after smoothing effectively becomes the result of applying a weighted region of interest (ROI).

6 Reasons for Motion Correction.
Subjects will always move in the scanner.
–Movement may be related to the tasks performed.
When identifying areas in the brain that appear activated due to the subject performing a task, it may not be possible to discount artefacts that have arisen due to motion.
The sensitivity of the analysis is determined by the amount of residual noise in the image series, so movement that is unrelated to the task will add to this noise and reduce the sensitivity.
The Steps in Motion Correction.
–Registration: determining the 6 parameters that describe the rigid body transformation between each image and a reference image.
–Transformation: re-sampling each image according to the determined transformation parameters.

7 Registration.
Determine the rigid body transformation that minimises the sum of squared difference between images.
A rigid body transformation is parameterised by:
–3 translations, in the X, Y & Z directions.
–3 rotations, about the X, Y & Z axes (pitch, roll and yaw).
The operations can be represented as affine transformation matrices:
x1 = m1,1 x0 + m1,2 y0 + m1,3 z0 + m1,4
y1 = m2,1 x0 + m2,2 y0 + m2,3 z0 + m2,4
z1 = m3,1 x0 + m3,2 y0 + m3,3 z0 + m3,4
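The six parameters can be assembled into the 4x4 affine matrix whose top three rows are the equations above. A minimal numpy sketch; note that the composition order Rz Ry Rx used here is one of several conventions and not necessarily the one SPM uses:

```python
import numpy as np

def rigid_body_matrix(tx, ty, tz, pitch, roll, yaw):
    """4x4 affine for a rigid-body transform: rotations (in radians) about the
    X (pitch), Y (roll) and Z (yaw) axes, followed by a translation."""
    cx, sx = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(roll), np.sin(roll)
    cz, sz = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    M = np.eye(4)
    M[:3, :3] = Rz @ Ry @ Rx   # composition order is a convention
    M[:3, 3] = [tx, ty, tz]
    return M

# Mapping a point: x1 = m11*x0 + m12*y0 + m13*z0 + m14, etc., in homogeneous form.
M = rigid_body_matrix(2.0, -1.0, 0.5, 0.01, 0.02, 0.03)
p1 = M @ np.array([10.0, 20.0, 30.0, 1.0])
```

The rotation block is orthonormal with determinant 1, which is exactly what "rigid body" means: no zooms or shears.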

8 Residual Errors from fMRI.
Gaps between slices can cause aliasing artefacts.
Re-sampling can introduce errors, especially tri-linear interpolation.
Ghosts (and other artefacts) in the images do not move according to the same rigid body rules as the subject.
Slices are not acquired simultaneously, so rapid movements are not accounted for by the rigid body model.
fMRI images are distorted, and the rigid body model does not model these types of distortion.
Spin excitation history effects: variations in residual magnetisation.
Functions of the estimated motion parameters can be used as confounds in subsequent analyses.
Residual Errors from PET.
Incorrect attenuation correction, because the transmission scan is no longer aligned with the emission scans.
Transformation.
One of the simplest re-sampling methods is tri-linear interpolation. Other methods include nearest neighbour re-sampling, and various forms of sinc interpolation using different numbers of neighbouring voxels.
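Tri-linear interpolation itself is simple to sketch: each resampled value is a weighted average of the eight surrounding voxels, which is also why it slightly smooths the data. An illustrative implementation, ignoring out-of-bounds handling:

```python
import numpy as np

def trilinear_sample(vol, x, y, z):
    """Trilinear interpolation of a 3-D array at a non-integer coordinate.
    The result is a weighted average of the 8 voxels surrounding (x, y, z);
    the weights are the fractional distances along each axis."""
    x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    dx, dy, dz = x - x0, y - y0, z - z0
    out = 0.0
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                w = ((dx if i else 1 - dx) *
                     (dy if j else 1 - dy) *
                     (dz if k else 1 - dz))
                out += w * vol[x0 + i, y0 + j, z0 + k]
    return out

# On a volume whose intensity is linear in position, trilinear sampling is exact.
vol = np.arange(27, dtype=float).reshape(3, 3, 3)   # vol[x, y, z] = 9x + 3y + z
v = trilinear_sample(vol, 0.5, 0.5, 0.5)
```

For non-linear intensity profiles the averaging introduces the interpolation error mentioned above, which motivates the sinc-based alternatives.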

9 Between Modality Co-registration.
Not based on simply minimising the mean squared difference between images; a three step approach is used instead.
1) Simultaneous affine registrations between each image and template images of the same modality.
2) Partitioning of the images into grey and white matter.
3) Final simultaneous registration of the image partitions.
Rigid registration between high resolution structural images and echo planar functional images is a problem: results are only approximate because of spatial distortions of the EPI data.

10 First Step - Affine Registrations.
Requires template images of the same modalities.
Both images are registered, using 12 parameter affine transformations, to their corresponding templates by minimising the mean squared difference.
Only the rigid-body transformation parameters differ between the two registrations.
This gives:
–a rigid body mapping between the images.
–affine mappings between the images and the templates.
Second Step - Segmentation.
'Mixture Model' cluster analysis to classify the MR image (or images) as GM, WM & CSF.
Additional information is obtained from a priori probability images, which are overlaid using the previously determined affine transformations.
Third Step - Registration of Partitions.
The grey and white matter partitions are registered using a rigid body transformation, simultaneously minimising the sum of squared difference.

11 Between Modality Coregistration using Mutual Information.
An alternative between modality registration method available within SPM99 maximises the Mutual Information in the 2D joint histogram of the two images (e.g. PET and T1 weighted MRI).
For histograms normalised to integrate to unity, the Mutual Information is defined by:
I = Σi Σj hij log( hij / (Σk hik Σl hlj) )
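The quantity being maximised can be computed directly from the normalised joint histogram. A numpy sketch, where the bin count and the test images are arbitrary choices for the example:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information of two images from their joint intensity histogram,
    I = sum_ij h_ij log( h_ij / (h_i. h_.j) ), with h normalised to sum to 1."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    h /= h.sum()
    px = h.sum(axis=1, keepdims=True)   # marginal over the first image
    py = h.sum(axis=0, keepdims=True)   # marginal over the second image
    nz = h > 0                          # skip empty bins to avoid log(0)
    return float((h[nz] * np.log(h[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
mi_self = mutual_information(img, img)                    # aligned: high MI
mi_rand = mutual_information(img, rng.random((64, 64)))   # unrelated: near zero
```

Registration then searches over rigid-body parameters for the pose that maximises this score; the appeal is that no assumption is made about how the two modalities' intensities relate.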

12 Spatial Normalisation.
Inter-subject averaging:
–extrapolate findings to the population as a whole.
–increase the activation signal above that obtained from a single subject.
–increase the number of possible degrees of freedom allowed in the statistical model.
Enable reporting of activations as co-ordinates within a known standard space, e.g. the space described by Talairach & Tournoux.
Warp the images such that functionally homologous regions from the different subjects are as close together as possible. Problems:
–no exact match between structure and function.
–different brains are organised differently.
–computational problems (local minima, not enough information in the images, computationally expensive).
Compromise by correcting for gross differences, followed by smoothing of the normalised images.

13 Spatial Normalisation.
Determine the spatial transformation that minimises the sum of squared difference between an image and a linear combination of one or more templates. [Illustration: original image, template image, spatially normalised image, deformation field.]
Begins with an affine registration to match the size and position of the image, followed by a global non-linear warping to match the overall brain shape.
Uses a Bayesian framework to simultaneously maximise the smoothness of the warps.

14 Affine versus affine and non-linear spatial normalisation. [Illustration: six affine registered images; six basis function registered images.]

15 Template images ("canonical" images): EPI, T2, T1, Transm, PD, PET, 305T1, PD, T2, SS.
A wider range of different contrasts can be normalised by registering to a linear combination of template images.
Spatial normalisation can be weighted so that non-brain voxels do not influence the result. Similar weighting masks can be used for normalising lesioned brains.

16 Bayesian Formulation.
Bayes rule states: p(q|e) ∝ p(e|q) p(q), where
–p(q|e) is the a posteriori probability of parameters q given errors e.
–p(e|q) is the likelihood of observing errors e given parameters q.
–p(q) is the a priori probability of parameters q.
The maximum a posteriori (MAP) estimate maximises p(q|e). Maximising p(q|e) is equivalent to minimising the Gibbs potential of the posterior distribution, H(q|e), where H(q|e) = -log p(q|e).
The posterior potential is the sum of the likelihood and prior potentials: H(q|e) = H(e|q) + H(q) + c.
–The likelihood potential, H(e|q) = -log p(e|q), is based upon the sum of squared difference between the images.
–The prior potential, H(q) = -log p(q), penalises unlikely deformations.
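For Gaussian errors and a Gaussian prior, minimising H(e|q) + H(q) reduces to penalised least squares, which makes the MAP idea easy to demonstrate on a toy linear model. This is only an illustration of the principle, not SPM's registration optimiser, and all names and values below are invented for the example:

```python
import numpy as np

def map_estimate(A, b, q0, lam):
    """MAP estimate for a linear model b = A q + noise with a Gaussian prior
    centred on q0: minimise ||A q - b||^2 + lam ||q - q0||^2. With Gaussian
    assumptions the two penalty terms are exactly the likelihood and prior
    potentials, and the minimiser has this closed form."""
    n = A.shape[1]
    lhs = A.T @ A + lam * np.eye(n)
    rhs = A.T @ b + lam * q0
    return np.linalg.solve(lhs, rhs)

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 3))
q_true = np.array([1.0, -2.0, 0.5])
b = A @ q_true + 0.1 * rng.standard_normal(20)
q_map = map_estimate(A, b, q0=np.zeros(3), lam=0.5)
```

Increasing lam (a stronger prior) pulls the estimate toward q0; in spatial normalisation the analogous term is what penalises unlikely deformations.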

17 Spatial Normalisation - Affine.
The first part of spatial normalisation is a 12 parameter affine transformation:
–3 translations.
–3 rotations.
–3 zooms.
–3 shears.
Find the parameters that minimise the sum of squared difference between the image and the template(s), and also the square of the number of standard deviations away from the expected parameter values (empirically generated priors).

18 Spatial Normalisation - Non-linear.
Deformations consist of a linear combination of smooth basis images: the lowest frequency basis images of a 3-D discrete cosine transform (DCT), which can be generated rapidly from a separable form.
The algorithm simultaneously minimises:
–the sum of squared difference between the template and the object image.
–the squared distance between the parameters and their known expectation, p^T C0^-1 p, which describes the membrane energy of the deformations.
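A 1-D orthonormal DCT basis, from which the separable 3-D basis images are built as outer products, can be generated directly. A sketch; SPM's exact scaling conventions may differ:

```python
import numpy as np

def dct_basis(n_points, n_basis):
    """First n_basis columns of an orthonormal DCT-II basis on n_points samples.
    Column 0 is constant; later columns are progressively higher-frequency
    cosines. Keeping only the lowest frequencies guarantees smooth warps."""
    k = np.arange(n_basis)
    x = np.arange(n_points)
    B = np.cos(np.pi * np.outer(x + 0.5, k) / n_points)
    B[:, 0] *= np.sqrt(1.0 / n_points)
    B[:, 1:] *= np.sqrt(2.0 / n_points)
    return B

B = dct_basis(16, 4)
# A 3-D basis image is a separable outer product of 1-D columns, e.g.:
basis_3d = np.einsum('i,j,k->ijk', B[:, 1], B[:, 0], B[:, 2])
```

Separability is what makes the basis cheap to evaluate: three small 1-D matrices replace one huge 3-D one.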

19 Without the Bayesian formulation, the non-linear spatial normalisation can introduce unnecessary warping into the spatially normalised images. [Illustration: template image; affine registration (χ² = 472.1); non-linear registration without regularisation (χ² = 287.3); non-linear registration using regularisation (χ² = 302.7).]

20 Segmentation.
'Mixture Model' cluster analysis to classify the MR image (or images) as GM, WM & CSF.
Additional information is obtained from prior probability images, which are overlaid.
Assumes that each MRI voxel is one of a number of distinct tissue types (clusters), and that each cluster has a (multivariate) normal distribution.
A smooth intensity modulating function can be modelled by a linear combination of DCT basis functions.
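The mixture model idea can be illustrated with a toy 1-D EM fit of two Gaussian clusters. Real tissue classification also folds in the prior probability images and the intensity non-uniformity field; this sketch omits both, and the cluster parameters are invented:

```python
import numpy as np

def gmm_em_1d(x, mu, sd, w, n_iter=50):
    """Toy EM for a 1-D Gaussian mixture. The E-step computes each cluster's
    responsibility for each intensity; the M-step re-estimates the means,
    standard deviations and mixing proportions from those responsibilities."""
    mu = np.array(mu, dtype=float)
    sd = np.array(sd, dtype=float)
    w = np.array(w, dtype=float)
    for _ in range(n_iter):
        # E-step: posterior probability of each cluster given each data point
        pdf = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted parameter updates
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        w = nk / x.size
    return mu, sd, w

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0.3, 0.05, 500), rng.normal(0.7, 0.05, 500)])
mu, sd, w = gmm_em_1d(x, mu=[0.2, 0.8], sd=[0.1, 0.1], w=[0.5, 0.5])
```

In the imaging case the two clusters would be, say, grey and white matter intensities, and the per-voxel responsibilities become the tissue probability maps.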

21 The segmented images contain a little non-brain tissue, which can be automatically removed using morphological operations (erosion followed by conditional dilation). More than one image can be used to produce a multi-spectral classification.
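The erosion-then-conditional-dilation step can be sketched with scipy.ndimage, where conditional dilation is ordinary dilation restricted by the original mask. The iteration counts and the toy mask below are arbitrary illustrative choices:

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

def cleanup_mask(mask, n_erode=2, n_dilate=4):
    """Remove thin non-brain tissue from a binary segmentation: erosion
    deletes thin structures, then conditional dilation (dilation restricted
    by the original mask) grows the surviving core back out."""
    core = binary_erosion(mask, iterations=n_erode)
    # conditional dilation: growth never escapes the original mask
    return binary_dilation(core, iterations=n_dilate, mask=mask)

# A brain-like blob plus a detached thin strand that should be stripped away.
mask = np.zeros((20, 20), dtype=bool)
mask[5:15, 5:15] = True     # 10x10 blob: survives erosion and regrows fully
mask[10, 17:20] = True      # 1-voxel strand: erased and never regrown
cleaned = cleanup_mask(mask)
```

The erosion count sets how thin a structure must be to be deleted; the dilation count must be large enough to regrow everything the erosion removed from the structures worth keeping.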

22 Morphometric Measures.
Voxel-by-voxel: where are the differences between the populations? Produce an SPM of regional differences.
–Univariate, e.g. Voxel-Based Morphometry.
–Multivariate, e.g. Tensor-Based Morphometry.
Volume based: is there a difference between the populations? (MANCOVA & CCA.)
–Multivariate, e.g. Deformation-Based Morphometry.

23 Voxel-Based Morphometry.
Preparation of the images for each subject: original image → spatially normalised → partitioned grey matter → smoothed.
A voxel by voxel statistical analysis is used to detect regional differences in the amount of grey matter between populations.

24 Morphometric approaches based on deformation fields.
Deformation-based Morphometry looks at absolute displacements. Tensor-based Morphometry looks at local shapes.

25 Deformation-based Morphometry.
From the deformation fields, remove the positional and size information to leave shape.
Parameter reduction using principal component analysis (SVD).
Multivariate analysis of covariance is used to identify differences between groups.
Canonical correlation analysis is used to characterise the differences between groups.
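The SVD-based parameter reduction can be sketched as follows: flatten each subject's deformation field into a row of a matrix, centre the matrix, and keep the leading components as low-dimensional scores for the MANCOVA. The subject and voxel counts here are made up, and random data stands in for real deformation fields:

```python
import numpy as np

# Each row of X is one subject's flattened deformation field.
rng = np.random.default_rng(3)
n_subjects, n_voxels = 10, 500
X = rng.standard_normal((n_subjects, n_voxels))

Xc = X - X.mean(axis=0)                       # remove the mean deformation
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :5] * s[:5]                     # 5 principal-component scores per subject
```

With far fewer variables than voxels, the multivariate group tests (MANCOVA, canonical correlation) become feasible despite the small number of subjects.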

26 Sex Differences using Deformation-based Morphometry.
Non-linear warps pertaining to sex differences, characterised by canonical variates analysis (above), and mean differences (below, mapping from an average female to an average male brain). In the transverse and coronal sections, the left side of the brain is on the left side of the figure.

27 Tensor-based Morphometry.
If the original Jacobian matrix is denoted by A, then it can be decomposed into A = RU, where R is an orthonormal rotation matrix, and U is a symmetric matrix containing only zooms and shears.
Strain tensors are defined that model the amount of distortion. If there is no strain, then the tensors are all zero. Generically, the family of Lagrangean strain tensors is given by (U^m - I)/m when m ≠ 0, and log(U) when m = 0.
[Illustration: template, warped and original images; relative volumes; strain tensor.]
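The decomposition A = RU is the (right) polar decomposition, available directly in scipy, and the strain tensors then follow from U. The Jacobian values below are invented for illustration:

```python
import numpy as np
from scipy.linalg import logm, polar

# A made-up local Jacobian matrix (hypothetical values, close to identity).
A = np.array([[1.10, 0.20, 0.00],
              [0.00, 0.90, 0.10],
              [0.05, 0.00, 1.00]])

R, U = polar(A, side='right')   # A = R @ U: rotation times symmetric zooms/shears
E1 = U - np.eye(3)              # Lagrangean strain with m = 1: (U^1 - I)/1
E0 = logm(U)                    # the m -> 0 limit: the matrix logarithm of U
```

Both strain tensors vanish exactly when U is the identity, i.e. when the local deformation is a pure rotation with no distortion.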

28 High Dimensional Warping.
Millions of parameters are needed for more precise image registration, which takes a very long time.
The relative volumes of brain structures can be computed from the determinants of the Jacobians of the deformation fields.
Data from the Dementia Research Group, London, UK.
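Computing the voxel-wise Jacobian determinants from a deformation field is a short numpy exercise using finite differences. A sketch, assuming the field stores the mapped coordinates of each voxel on a regular unit grid:

```python
import numpy as np

def jacobian_determinants(def_field):
    """Voxel-wise Jacobian determinants of a deformation field of shape
    (3, nx, ny, nz), where def_field[c] holds mapped coordinate c for every
    voxel. det > 1 marks local expansion, det < 1 local contraction."""
    grads = np.stack([np.stack(np.gradient(def_field[c])) for c in range(3)])
    # grads[c, d] is the derivative of component c along axis d; move the two
    # leading axes to the end to form a 3x3 Jacobian matrix at each voxel
    J = np.moveaxis(grads, (0, 1), (-2, -1))
    return np.linalg.det(J)

# Identity deformation: determinant 1 everywhere; doubling all coordinates: 8.
grid = np.array(np.meshgrid(*[np.arange(8.0)] * 3, indexing='ij'))
det_id = jacobian_determinants(grid)
det_scaled = jacobian_determinants(2.0 * grid)
```

Summing the determinants over the voxels of a labelled structure gives its volume in the original image relative to the template, which is how the relative structure volumes mentioned above are obtained.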

29 References.
–Friston et al (1995): Spatial registration and normalisation of images. Human Brain Mapping 3(3).
–Ashburner & Friston (1997): Multimodal image coregistration and partitioning - a unified framework. NeuroImage 6(3).
–Collignon et al (1995): Automated multi-modality image registration based on information theory. IPMI'95.
–Ashburner et al (1997): Incorporating prior knowledge into image registration. NeuroImage 6(4).
–Ashburner et al (1999): Nonlinear spatial normalisation using basis functions. Human Brain Mapping 7(4).
–Ashburner & Friston (2000): Voxel-based morphometry - the methods. NeuroImage 11.

