Statistical Parametric Mapping (SPM): Talk I: Spatial Pre-processing; Talk II: General Linear Model; Talk III: Statistical Inference; Talk IV: Experimental Design.

Presentation transcript:
1 Statistical Parametric Mapping (SPM). Talk I: Spatial Pre-processing; Talk II: General Linear Model; Talk III: Statistical Inference; Talk IV: Experimental Design.

2 Spatial Preprocessing & Computational Neuroanatomy. With thanks to: John Ashburner, Jesper Andersson.

3 Overview. [Pipeline figure: fMRI time-series → motion correction → spatial normalisation (standard template) → smoothing kernel → General Linear Model (design matrix) → parameter estimates → Statistical Parametric Map]

4 Overview. 1. Realignment (motion correction) 2. Normalisation (to stereotactic space) 3. Smoothing 4. Between-modality Coregistration 5. Segmentation (to gray/white/CSF) 6. Morphometry (VBM/DBM/TBM)

5 Overview. 1. Realignment (motion correction) 2. Normalisation (to stereotactic space) 3. Smoothing 4. Between-modality Coregistration 5. Segmentation (to gray/white/CSF) 6. Morphometry (VBM/DBM/TBM)

6 Reasons for Motion Correction. Subjects will always move in the scanner. The sensitivity of the analysis depends on the residual noise in the image series, so movement that is unrelated to the subject's task will add to this noise; realignment therefore increases sensitivity. However, subject movement may also correlate with the task, in which case realignment may reduce sensitivity (and it may not be possible to discount artefacts that are due to motion). Realignment (of same-modality images from the same subject) involves two stages: 1. Registration: determining the 6 parameters that describe the rigid-body transformation between each image and a reference image; 2. Transformation (reslicing): re-sampling each image according to the determined transformation parameters.

7 1. Registration. Determine the rigid-body transformation that minimises the sum of squared difference between images. A rigid-body transformation is defined by 3 translations (in the X, Y and Z directions) and 3 rotations (pitch, roll and yaw, about the X, Y and Z axes). The operations can be represented as affine transformation matrices:
x1 = m11·x0 + m12·y0 + m13·z0 + m14
y1 = m21·x0 + m22·y0 + m23·z0 + m24
z1 = m31·x0 + m32·y0 + m33·z0 + m34
[Figure: rigid-body transformations parameterised by translations, pitch, roll and yaw; squared error between images]
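The affine equations above can be made concrete in a short sketch. The following is a minimal numpy illustration of building a 4×4 rigid-body matrix from the 6 parameters; the parameter ordering and rotation sign conventions here are assumptions for illustration and may differ from SPM's own conventions:

```python
import numpy as np

def rigid_body_affine(tx, ty, tz, pitch, roll, yaw):
    """Build a 4x4 rigid-body transform from 3 translations (mm)
    and 3 rotations (radians) about the X, Y and Z axes."""
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    cx, sx = np.cos(pitch), np.sin(pitch)   # rotation about X (pitch)
    cy, sy = np.cos(roll),  np.sin(roll)    # rotation about Y (roll)
    cz, sz = np.cos(yaw),   np.sin(yaw)     # rotation about Z (yaw)
    Rx = np.array([[1, 0, 0, 0], [0, cx, sx, 0], [0, -sx, cx, 0], [0, 0, 0, 1]])
    Ry = np.array([[cy, 0, sy, 0], [0, 1, 0, 0], [-sy, 0, cy, 0], [0, 0, 0, 1]])
    Rz = np.array([[cz, sz, 0, 0], [-sz, cz, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
    return T @ Rx @ Ry @ Rz   # entries m[i,j] as in the equations above

# Map a voxel coordinate (in homogeneous form) through the transform:
M = rigid_body_affine(2.0, 0.0, -1.5, 0.01, 0.0, 0.0)
x1, y1, z1, _ = M @ np.array([10.0, 20.0, 30.0, 1.0])
```

The 3×3 upper-left block stays orthonormal with determinant 1, which is exactly what distinguishes a rigid-body transform from a general affine one.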

8 1. Registration. Iterative procedure (Gauss-Newton ascent), with an additional scaling parameter. An N×6 matrix of realignment parameters is written to file (N is the number of scans). Orientation matrices in the *.mat file are updated for each volume (volumes do not have to be resliced). Slice-timing correction can be performed before or after realignment (depending on acquisition).

9 2. Transformation (reslicing). Application of the registration parameters involves re-sampling the image to create new voxels by interpolation from existing voxels. Interpolation can be nearest neighbour (0th-order), tri-linear (1st-order), (windowed) Fourier/sinc, or, in SPM2, nth-order B-splines. [Figure: nearest neighbour, linear, full sinc (no alias), windowed sinc]
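Reslicing under different interpolation orders can be sketched with scipy; this is not SPM's resampler, just an illustration of how the choice of order (0 = nearest neighbour, 1 = tri-linear, 3 = cubic B-spline) affects the resampled values for the same transform:

```python
import numpy as np
from scipy.ndimage import affine_transform

vol = np.random.default_rng(0).random((16, 16, 16))

# Resample under a rigid shift of half a voxel along x, three ways.
# affine_transform maps each output voxel back into the input volume
# (input_coord = matrix @ output_coord + offset) and interpolates there.
shift = [0.5, 0.0, 0.0]
nearest   = affine_transform(vol, np.eye(3), offset=shift, order=0)
trilinear = affine_transform(vol, np.eye(3), offset=shift, order=1)
bspline3  = affine_transform(vol, np.eye(3), offset=shift, order=3)
```

With a half-voxel shift, tri-linear interpolation returns exactly the average of each pair of neighbouring voxels along x, which is the source of the blurring (and interpolation error) discussed on the next slide.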

10 Residual Errors after Realignment. Interpolation errors, especially with tri-linear interpolation and small-window sinc. PET: incorrect attenuation correction, because scans are no longer aligned with the transmission scan (a transmission scan is often acquired to give a map of local positron attenuation). fMRI (EPI): ghosts (and other artefacts) in the image, which do not move as a rigid body; rapid movements within a scan, which cause non-rigid image deformation; spin excitation history effects (residual magnetisation effects of previous scans); and interaction between movement and local field inhomogeneity, giving non-rigid distortion.

11 Unwarp (new in SPM2). Echo-planar images (EPI) contain distortions owing to field inhomogeneities (susceptibility artifacts, particularly in the phase-encoding direction). They can be "undistorted" by use of a field-map (available in the "FieldMap" SPM toolbox). (Note that susceptibility artifacts that cause drop-out are more difficult to correct.) However, movement interacts with the field inhomogeneity (the presence of the object affects B0), i.e. the distortions change with the position of the object in the field. This movement-by-distortion interaction can be accommodated during realignment using "unwarp". [Figure: distorted image, field-map, corrected image]

12 Unwarp (new in SPM2). One could include the movement parameters as confounds in the statistical model of activations. However, this may remove activations of interest if they are correlated with the movement. Better is to incorporate physics knowledge, e.g. to model how the field changes as a function of pitch and roll (assuming phase-encoding is in the y-direction), using a Taylor expansion about the mean realigned image. Iterate: 1) estimate the movement parameters, 2) estimate the deformation fields, 1) re-estimate the movement, and so on. The fields are expressed by spatial basis functions (a 3D discrete cosine set). [Figure: estimated derivative fields for pitch and roll, combined with B0]

13 Unwarp (new in SPM2). [Figure: the field B0 for each scan i is modelled as a 0th-order term (which can be determined from the fieldmap) plus movement-weighted derivative basis fields, plus error.]

14 Unwarp (new in SPM2). Example with movement correlated with the design: no correction, tmax = 13.38; correction by covariation, tmax = 5.06; correction by Unwarp, tmax = 9.57.

15 Overview. 1. Realignment (motion correction) 2. Normalisation (to stereotactic space) 3. Smoothing 4. Between-modality Coregistration 5. Segmentation (to gray/white/CSF) 6. Morphometry (VBM/DBM/TBM)

16 Reasons for Normalisation. Inter-subject averaging: extrapolate findings to the population as a whole; increase statistical power above that obtained from a single subject. Reporting of activations as co-ordinates within a standard stereotactic space, e.g. the space described by Talairach & Tournoux. Label-based approaches: warp the images such that defined landmarks (points/lines/surfaces) are aligned; but there are few readily identifiable landmarks (and they may have to be manually defined). Intensity-based approaches: warp the images to maximise some voxel-wise similarity measure, e.g. squared error, assuming intensity correspondence (within-modality). Normalisation is constrained to correct only gross differences; residual variability is accommodated by subsequent spatial smoothing.

17 Spatial Normalisation. Determine the transformation that minimises the sum of squared difference between an image and a (combination of) template image(s). Two stages: 1. affine registration to match the size and position of the images; 2. non-linear warping to match the overall brain shape. Uses a Bayesian framework to constrain the affine and non-linear warps. [Figure: original image and template image → spatially normalised image, via a deformation field]

18 Stage 1. Full Affine Transformation. The first part of normalisation is a 12-parameter affine transformation: 3 translations, 3 rotations, 3 zooms and 3 shears. It is better if the template image is in the same modality (e.g. because of image distortions present in EPI but not T1). [Figure: rigid body (rotation, translation) plus zoom and shear]
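The 12-parameter affine can be composed from the four operation types listed above. A minimal numpy sketch follows; the composition order (translation · rotation · zoom · shear) is a convention chosen here for illustration and may not match SPM's internal ordering:

```python
import numpy as np

def affine_12(t, r, z, s):
    """Compose a full affine from 3 translations t (mm), 3 rotations r
    (radians), 3 zooms z and 3 shears s."""
    T = np.eye(4)
    T[:3, 3] = t                                   # translations
    cx, sx = np.cos(r[0]), np.sin(r[0])
    cy, sy = np.cos(r[1]), np.sin(r[1])
    cz, sz = np.cos(r[2]), np.sin(r[2])
    Rx = np.array([[1, 0, 0], [0, cx, sx], [0, -sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, sz, 0], [-sz, cz, 0], [0, 0, 1]])
    R = np.eye(4)
    R[:3, :3] = Rx @ Ry @ Rz                       # rotations
    Z = np.diag([z[0], z[1], z[2], 1.0])           # zooms
    S = np.eye(4)
    S[0, 1], S[0, 2], S[1, 2] = s                  # shears
    return T @ R @ Z @ S

M = affine_12([0, 0, 0], [0, 0, 0], [1.1, 0.9, 1.0], [0.05, 0.0, 0.0])
```

Because rotations and shears are volume-preserving, the determinant of the 3×3 part equals the product of the zooms, i.e. the local volume scaling introduced by the transform.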

19 Insufficiency of affine-only normalisation. [Figure: six affine-registered images vs. six affine + nonlinear registered images]

20 Stage 2. Nonlinear Warps. Deformations consist of a linear combination of smooth basis images. These are the lowest-frequency basis images of a 3-D discrete cosine transform. Brain masks can be applied (e.g. for lesions).
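The low-frequency discrete cosine basis can be sketched directly; the sizes below (32 voxels per axis, 4 frequencies) are illustrative only:

```python
import numpy as np

def dct_basis(N, K):
    """First K columns of the orthonormal 1-D discrete cosine (DCT-II)
    basis: the K lowest spatial frequencies over N voxels."""
    i = np.arange(N)
    B = np.zeros((N, K))
    B[:, 0] = 1.0 / np.sqrt(N)                       # constant term
    for k in range(1, K):
        B[:, k] = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * i + 1) * k / (2 * N))
    return B

# A 3-D basis image is a separable product of 1-D bases; a deformation
# field is then a weighted sum of such smooth images.
Bx, By, Bz = dct_basis(32, 4), dct_basis(32, 4), dct_basis(32, 4)
basis_img = np.einsum('i,j,k->ijk', Bx[:, 1], By[:, 0], Bz[:, 0])
```

Keeping only the lowest frequencies is what guarantees the estimated warps are smooth: there are simply no basis images available that could represent sharp, local deformations.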

21 Bayesian Constraints. Without the Bayesian formulation, non-linear spatial normalisation can introduce unnecessary warping into the spatially normalised images. [Figure: template image; affine registration (χ² = 472.1); non-linear registration without regularisation (χ² = 287.3); non-linear registration with regularisation (χ² = 302.7)]

22 Bayesian Constraints. Using Bayes' rule, we can constrain ("regularise") the nonlinear fit by incorporating prior knowledge of the likely extent of deformations:
p(p|e) ∝ p(e|p) p(p) (Bayes' rule)
where p(p|e) is the a posteriori probability of parameters p given errors e, p(e|p) is the likelihood of observing errors e given parameters p, and p(p) is the a priori probability of parameters p. For the maximum a posteriori (MAP) estimate, we minimise (taking logs):
H(p|e) = H(e|p) + λH(p) (Gibbs potential)
where H(e|p) (= −log p(e|p)) is the squared difference between the images (the error), H(p) (= −log p(p)) constrains the parameters (penalising unlikely deformations), and λ is a "regularisation" hyperparameter weighting the effect of the "priors".
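The MAP criterion (squared error plus a weighted penalty on the parameters) has a closed form when the model is linear and the prior Gaussian. The sketch below is a toy linear model, not SPM's deformation model; it only illustrates how the hyperparameter trades data fit against the prior:

```python
import numpy as np

def map_estimate(X, y, p0, lam):
    """MAP estimate for y = X p + e with Gaussian errors and a Gaussian
    prior centred on p0: minimises ||y - X p||^2 + lam * ||p - p0||^2."""
    A = X.T @ X + lam * np.eye(X.shape[1])
    b = X.T @ y + lam * p0
    return np.linalg.solve(A, b)

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))
p_true = np.array([1.0, -2.0, 0.5])
y = X @ p_true + 0.1 * rng.normal(size=20)

p_ml  = map_estimate(X, y, np.zeros(3), 0.0)    # lam = 0: plain least squares
p_map = map_estimate(X, y, np.zeros(3), 50.0)   # strong prior shrinks p toward p0
```

As lam grows, the estimate is pulled toward the prior expectation p0, which is exactly the behaviour used above to stop the nonlinear warps from becoming implausibly large.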

23 Bayesian Constraints. The algorithm simultaneously minimises: the sum of squared difference between template and object; and the squared distance between the parameters and their expectation. Bayesian constraints are applied to both: 1) the affine transformations, based on empirical prior ranges; and 2) the nonlinear deformations, based on a smoothness constraint (minimising membrane energy). [Figure: empirically generated priors]

24 Overview. 1. Realignment (motion correction) 2. Normalisation (to stereotactic space) 3. Smoothing 4. Between-modality Coregistration 5. Segmentation (to gray/white/CSF) 6. Morphometry (VBM/DBM/TBM)

25 Reasons for Smoothing. Potentially increases signal to noise (matched filter theorem). Inter-subject averaging (allowing for residual differences after normalisation). Increases the validity of statistics (errors are more likely to be distributed normally). The Gaussian smoothing kernel is defined in terms of the FWHM (full width at half maximum) of the filter: usually ~16-20 mm (PET) or ~6-8 mm (fMRI). The ultimate smoothness is a function of the applied smoothing and the intrinsic image smoothness (sometimes expressed as "resels": RESolvable Elements).
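The FWHM specification converts to the Gaussian's standard deviation via FWHM = σ·√(8 ln 2) ≈ 2.355 σ. A minimal scipy sketch (the 2 mm voxel size and 8 mm FWHM are example values, not recommendations):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fwhm_to_sigma(fwhm_mm, voxel_size_mm):
    """Convert a kernel width given as FWHM in mm to a standard
    deviation in voxel units: FWHM = sigma * sqrt(8 * ln 2)."""
    return fwhm_mm / (np.sqrt(8 * np.log(2)) * voxel_size_mm)

# Smooth a point source with an 8 mm FWHM kernel on a 2 mm grid:
vol = np.zeros((32, 32, 32))
vol[16, 16, 16] = 1.0
sigma = fwhm_to_sigma(8.0, 2.0)        # ~1.70 voxels
smoothed = gaussian_filter(vol, sigma)
```

The point source spreads into a Gaussian blob whose value falls to half its peak at a distance of FWHM/2 (here 4 mm, i.e. 2 voxels) from the centre, while the total intensity is preserved.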

26 Overview. 1. Realignment (motion correction) 2. Normalisation (to stereotactic space) 3. Smoothing 4. Between-modality Coregistration 5. Segmentation (to gray/white/CSF) 6. Morphometry (VBM/DBM/TBM)

27 Between Modality Co-registration. Useful, for example, to display functional results (EPI) onto a high-resolution anatomical image (T1). Because different modality images have different properties (e.g. the relative intensity of gray and white matter), one cannot simply minimise the difference between the images. Two main approaches: I. via templates: 1) simultaneous affine registrations between each image and a same-modality template, 2) segmentation into grey and white matter, 3) a final simultaneous registration of the segments; II. mutual information. [Figure: EPI, T2, T1, transmission, PD and PET images]

28 Between Modality Co-registration: I. Via Templates. 1. Affine registrations: both images are registered, using 12-parameter affine transformations, to their corresponding templates, but only the rigid-body transformation parameters are allowed to differ between the two registrations. This gives a rigid-body mapping between the images, and affine mappings between the images and the templates. 2. Segmentation: 'Mixture Model' cluster analysis to classify the MR image as GM, WM and CSF; additional information is obtained from a priori probability images (see later). 3. Registration of partitions: the grey and white matter partitions are registered using a rigid-body transformation, simultaneously minimising the sum of squared difference.

29 Between Modality Coregistration: II. Mutual Information (new in SPM2). Another way is to maximise the mutual information in the 2D histogram (a plot of one image against the other). For histograms normalised to integrate to unity, the mutual information is: Σi Σj hij log [ hij / (Σk hik · Σl hlj) ]. [Figure: 2D joint histogram of PET vs. T1 MRI]
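The expression above can be computed directly from a normalised 2-D histogram. A minimal numpy sketch (the bin count of 32 is an arbitrary choice for illustration):

```python
import numpy as np

def mutual_information(img1, img2, bins=32):
    """Mutual information of the joint intensity histogram, normalised
    to integrate to one, as in the expression above."""
    h, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
    h = h / h.sum()                        # joint density h_ij
    px = h.sum(axis=1, keepdims=True)      # marginal over j (rows)
    py = h.sum(axis=0, keepdims=True)      # marginal over i (columns)
    nz = h > 0                             # only sum where h_ij > 0
    return float((h[nz] * np.log(h[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
a = rng.normal(size=10_000)
mi_self  = mutual_information(a, a)                        # one image predicts the other
mi_indep = mutual_information(a, rng.normal(size=10_000))  # near zero for unrelated images
```

Mutual information is high when one image's intensity predicts the other's, regardless of whether the relationship is linear, which is why it works across modalities where squared error does not.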

30 Overview. 1. Realignment (motion correction) 2. Normalisation (to stereotactic space) 3. Smoothing 4. Between-modality Coregistration 5. Segmentation (to gray/white/CSF) 6. Morphometry (VBM/DBM/TBM)

31 Image Segmentation. Partition into gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF). 'Mixture Model' cluster analysis is used, which assumes each voxel is one of a number of distinct tissue types (clusters), each with a (multivariate) normal distribution. Further Bayesian constraints come from prior probability images, which are overlaid. An additional correction for intensity inhomogeneity is possible. [Figure: intensity histogram fit by multiple Gaussians; prior probability images for GM, WM, CSF and brain/skull]
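The mixture-model idea can be sketched with a tiny 1-D EM fit to voxel intensities. This is illustrative only: SPM's segmentation additionally uses spatial priors and bias-field correction, and the three intensity means below are synthetic stand-ins for tissue classes:

```python
import numpy as np

def fit_gmm_1d(x, n_clusters=3, n_iter=50):
    """Tiny EM for a 1-D Gaussian mixture: each voxel intensity is
    assumed to come from one of n_clusters normally distributed classes."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, n_clusters))  # spread initial means
    var = np.full(n_clusters, x.var())
    pi = np.full(n_clusters, 1.0 / n_clusters)
    for _ in range(n_iter):
        # E-step: responsibility of each class for each voxel
        d = (x[:, None] - mu[None, :]) ** 2
        r = pi * np.exp(-0.5 * d / var) / np.sqrt(2 * np.pi * var)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update class weights, means and variances
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu[None, :]) ** 2).sum(axis=0) / nk
    return pi, mu, var

# Synthetic intensities for three "tissue" classes:
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(m, 5.0, 2000) for m in (30.0, 70.0, 110.0)])
pi, mu, var = fit_gmm_1d(x)
```

The responsibilities computed in the E-step are soft tissue-class memberships per voxel; this is where the overlaid prior probability images would enter in the full algorithm, multiplying the likelihood before normalisation.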

32 Overview. 1. Realignment (motion correction) 2. Normalisation (to stereotactic space) 3. Smoothing 4. Between-modality Coregistration 5. Segmentation (to gray/white/CSF) 6. Morphometry (VBM/DBM/TBM)

33 Morphometry (Computational Neuroanatomy). Voxel-by-voxel: where are the differences between populations? Univariate, e.g. Voxel-Based Morphometry (VBM); multivariate, e.g. Tensor-Based Morphometry (TBM). Volume-based: is there a difference between populations? Multivariate, e.g. Deformation-Based Morphometry (DBM). These form a continuum: with perfect normalisation, all information is in the deformation field (no VBM differences); with no normalisation, all information is in the VBM comparison. [Figure: original and template → normalised image (VBM, TBM) and deformation field (DBM)]

34 Voxel-Based Morphometry (VBM)
"Optimised" VBM involves segmenting images before normalising, so that gray matter, white matter and CSF can be normalised separately... A voxel-by-voxel statistical analysis is then used to detect regional differences in the amount of grey matter between populations.
[Figure: original image → spatially normalised → segmented grey matter → smoothed → SPM.]
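A minimal, hypothetical version of that voxel-by-voxel step might look like the sketch below: smooth each subject's normalised grey-matter segment, then test every voxel across groups. In SPM the analysis is a full general linear model with covariates and multiple-comparison correction; none of that is shown, and the function and array conventions here are assumptions.

```python
import numpy as np
from scipy import ndimage, stats

def vbm_stats(gm_a, gm_b, fwhm_vox=4.0):
    """Smooth each subject's grey-matter map, then run a voxelwise
    two-sample t-test between groups (a toy VBM-style analysis).

    gm_a, gm_b: (n_subjects, x, y, z) arrays of spatially
    normalised grey-matter segments. Returns t and p maps.
    """
    # Convert smoothing kernel FWHM (in voxels) to a Gaussian sigma.
    sigma = fwhm_vox / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    sm_a = np.stack([ndimage.gaussian_filter(v, sigma) for v in gm_a])
    sm_b = np.stack([ndimage.gaussian_filter(v, sigma) for v in gm_b])
    # Independent-samples t-test at every voxel simultaneously.
    t, p = stats.ttest_ind(sm_a, sm_b, axis=0)
    return t, p  # each of shape (x, y, z)
```

Thresholding the resulting t map (with an appropriate correction) gives the statistical parametric map of regional grey-matter differences.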

35 Optimised VBM
[Flow diagram: T1 image → affine registration to template → segmentation & extraction (using priors) → spatial normalisation → apply deformation → normalised T1 → segmentation & extraction → smoothing → stats on concentration; or, with modulation, → smoothing → stats on volume.]

36 VBM Examples: Aging
Grey matter volume loss with age: superior parietal, pre- and post-central, insula, cingulate/parafalcine.

37 VBM Examples: Sex Differences
[Figure: regions differing for males > females and for females > males, including L superior temporal sulcus, R middle temporal gyrus, intraparietal sulci, mesial temporal cortex, temporal pole and anterior cerebellum.]

38 VBM Examples: Brain Asymmetries
Right frontal and left occipital petalia.

39 Morphometry on deformation fields: DBM/TBM
Deformation-based Morphometry looks at absolute displacements (the vector field); Tensor-based Morphometry looks at local shapes (the tensor field).
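The distinction can be made concrete: DBM analyses the displacement vectors themselves, while TBM analyses their local derivatives, i.e. the Jacobian matrix at each voxel. A finite-difference sketch of computing that tensor field (array conventions are my own assumption, not SPM's):

```python
import numpy as np

def jacobian_field(deformation, spacing=1.0):
    """Local Jacobian matrices of a 3-D deformation field by
    finite differences -- the 'tensor field' TBM works on.

    deformation: (3, x, y, z) array mapping voxel coordinates to
    deformed coordinates. Returns (x, y, z, 3, 3) Jacobians, with
    J[..., i, j] = d(deformation_i)/d(axis_j).
    """
    # np.gradient returns one derivative array per spatial axis.
    grads = [np.gradient(deformation[i], spacing) for i in range(3)]
    J = np.stack([np.stack(g, axis=-1) for g in grads], axis=-2)
    return J
```

For the identity deformation the Jacobian is the identity matrix everywhere; local volume change is its determinant.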

40 Deformation-based Morphometry (DBM)
Deformation fields...
– Remove positional and size information, leaving shape
– Parameter reduction using principal component analysis (SVD)
– Multivariate analysis of covariance used to identify differences between groups
– Canonical correlation analysis used to characterise differences between groups
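The SVD parameter-reduction step above can be sketched as follows, assuming each subject's deformation field has been flattened into one row of a matrix (a hypothetical helper, not SPM's implementation; the MANCOVA and canonical correlation stages are not shown):

```python
import numpy as np

def reduce_deformations(fields, n_components=10):
    """Reduce deformation fields to a few principal components
    via SVD, prior to multivariate group comparison.

    fields: (n_subjects, n_params) array, one flattened
    deformation field per row.
    """
    mean = fields.mean(axis=0)
    # SVD of the mean-centred data gives the principal modes.
    u, s, vt = np.linalg.svd(fields - mean, full_matrices=False)
    scores = u[:, :n_components] * s[:n_components]  # per-subject scores
    components = vt[:n_components]                   # principal modes
    return scores, components, mean
```

The low-dimensional subject scores (rather than the millions of raw deformation parameters) are then entered into the multivariate tests.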

41 DBM Example: Sex Differences
Non-linear warps of sex differences characterised by canonical variates analysis.
[Figure: mean differences (mapping from an average female to an average male brain).]

42 Tensor-based Morphometry
If the original Jacobian matrix is denoted by A, then it can be decomposed as A = RU, where R is an orthonormal rotation matrix and U is a symmetric matrix containing only zooms and shears. Strain tensors are defined that model the amount of distortion; if there is no strain, the tensors are all zero. Generically, the family of Lagrangian strain tensors is given by (U^m − I)/m when m ≠ 0, and log(U) when m = 0.
[Figure: template, warped and original images; relative volumes and strain tensor maps.]
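The decomposition and strain tensor on this slide can be computed directly from the SVD: writing A = W S Vᵀ gives R = W Vᵀ and U = V S Vᵀ, and functions of U act on its eigenvalues (the singular values of A). A minimal sketch, with the function name and interface being assumptions:

```python
import numpy as np

def strain_tensor(A, m=0):
    """Polar decomposition A = R U and the Lagrangian strain
    tensor (U^m - I)/m, or log(U) when m == 0.

    Uses the SVD A = W S V^T: R = W V^T (rotation) and
    U = V S V^T (symmetric zooms/shears).
    """
    w, s, vt = np.linalg.svd(A)
    R = w @ vt                   # orthonormal rotation part
    U = vt.T @ np.diag(s) @ vt   # symmetric stretch part
    # Apply the strain function to the eigenvalues of U.
    f = np.log(s) if m == 0 else (s ** m - 1.0) / m
    E = vt.T @ np.diag(f) @ vt
    return R, U, E
```

A pure rotation has U = I and zero strain, as the slide states; a pure zoom of factors (2, 3) has log-strain diag(log 2, log 3).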

43 References
Friston et al (1995): Spatial registration and normalisation of images. Human Brain Mapping 3(3):165-189
Ashburner & Friston (1997): Multimodal image coregistration and partitioning - a unified framework. NeuroImage 6(3):209-217
Collignon et al (1995): Automated multi-modality image registration based on information theory. IPMI'95 pp 263-274
Ashburner et al (1997): Incorporating prior knowledge into image registration. NeuroImage 6(4):344-352
Ashburner et al (1999): Nonlinear spatial normalisation using basis functions. Human Brain Mapping 7(4):254-266
Ashburner & Friston (2000): Voxel-based morphometry - the methods. NeuroImage 11:805-821


