1
Random Field Theory
Will Penny, SPM short course, London, May 2005
David Carmichael, MfD 2006
3
The SPM pipeline: image data goes through realignment & motion correction, smoothing, and normalisation (using an anatomical reference and a smoothing kernel); the General Linear Model (design matrix, model fitting, parameter estimates) produces a statistic image; Random Field Theory then gives corrected p-values for the Statistical Parametric Map.
4
Overview
1. Terminology
2. Random Field Theory
3. Cluster level inference
4. SPM Results
5. FDR
5
Overview
1. Terminology
2. Random Field Theory
3. Cluster level inference
4. SPM Results
5. FDR
6
Inference at a single voxel
NULL hypothesis, H: activation is zero.
The statistic u = (effect size)/std(effect size) follows a t-distribution under H (e.g. u = 2).
p-value: α = p(t > u | H), the probability of getting a value of t at least as extreme as u.
If this p-value is small, we reject the null hypothesis.
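A minimal sketch of this single-voxel test (not from the slides): given an observed statistic u and the residual degrees of freedom of the design (here df = 30, an illustrative value), the uncorrected p-value is the upper tail of the t-distribution.

```python
# Uncorrected p-value at one voxel: p = P(t > u | H).
# df = 30 is an assumed, illustrative degrees of freedom.
from scipy import stats

u = 2.0          # observed t-value at the voxel
df = 30          # residual degrees of freedom (illustrative)

p_value = stats.t.sf(u, df)   # survival function: P(t > u)
print(f"P(t > {u}) = {p_value:.4f}")   # ~0.027 for df = 30
```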
7
Sensitivity and Specificity

TRUTH \ ACTION    Don't Reject    Reject
H True            TN              FP
H False           FN              TP

Specificity = TN/(# H True) = TN/(TN+FP) = 1 - α
Sensitivity = TP/(# H False) = TP/(TP+FN) = power
α = FP/(# H True) = FP/(TN+FP) = p-value / false-positive rate / significance level
8
Sensitivity and Specificity: example t-scores from regions that truly do and do not activate, thresholded at u1.

TRUTH \ ACTION    Don't Reject    Reject
H True (o)        TN = 7          FP = 3
H False (x)       FN = 0          TP = 10

At u1: Specificity = TN/(# H True) = 7/10 = 70%; Sensitivity = TP/(# H False) = 10/10 = 100%.
(Figure: o and x t-scores along a line with the threshold u1 marked.)
9
Sensitivity and Specificity: the same example t-scores, thresholded at a higher value u2.

TRUTH \ ACTION    Don't Reject    Reject
H True (o)        TN = 9          FP = 1
H False (x)       FN = 3          TP = 7

At u2: Specificity = TN/(# H True) = 9/10 = 90%; Sensitivity = TP/(# H False) = 7/10 = 70%.
(Figure: o and x t-scores along a line with the threshold u2 marked.)
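A minimal sketch of the trade-off shown in these two slides, using invented t-scores and labels (the slides' actual values are not given): raising the threshold improves specificity at the cost of sensitivity.

```python
# Sensitivity and specificity at two thresholds, for t-scores whose true
# labels (active / not active) are known. Data are illustrative only.
import numpy as np

t = np.array([0.5, 1.0, 1.5, 2.0, 2.2, 2.5, 3.0, 3.5, 4.0, 4.5,   # truly inactive (o)
              2.8, 3.2, 3.6, 4.1, 4.4, 4.8, 5.0, 5.5, 6.0, 6.5])  # truly active (x)
active = np.array([False] * 10 + [True] * 10)

def sens_spec(t, active, u):
    reject = t > u
    tp = np.sum(reject & active)
    tn = np.sum(~reject & ~active)
    sensitivity = tp / np.sum(active)       # TP / (# H False)
    specificity = tn / np.sum(~active)      # TN / (# H True)
    return sensitivity, specificity

for u in (2.5, 4.2):
    s, sp = sens_spec(t, active, u)
    print(f"u={u}: sensitivity={s:.0%}, specificity={sp:.0%}")
```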
10
Inference at a single voxel
NULL hypothesis, H: activation is zero; α = p(t > u | H) under the t-distribution (e.g. u = 2).
We can choose u to ensure a voxel-wise significance level of α. This is called an 'uncorrected' p-value, for reasons we'll see later. We can then plot a map of above-threshold voxels.
11
Inference for Images
(Figure: simulated statistic images, one containing Signal + Noise and one containing Noise only.)
12
Using an ‘uncorrected’ p-value of 0.1 will lead us to conclude on average that 10% of voxels are active when they are not. This is clearly undesirable. To correct for this we can define a null hypothesis for images of statistics.
13
Family-wise Null Hypothesis
FAMILY-WISE NULL HYPOTHESIS: activation is zero everywhere.
If we reject the voxel null hypothesis at any voxel, we reject the family-wise null hypothesis.
A false positive anywhere in the image gives a Family-Wise Error (FWE).
Family-Wise Error (FWE) rate = 'corrected' p-value.
14
(Figure: thresholded statistic images.) Use of an 'uncorrected' p-value, α = 0.1, gives many false positives across the image; use of a 'corrected' (FWE) p-value, α = 0.1, does not.
15
The Bonferroni correction
The Family-Wise Error rate (FWE), α, for a family of N independent voxels is α = Nv, where v is the voxel-wise error rate. Therefore, to ensure a particular FWE, set v = α / N.
BUT...
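A minimal sketch of the correction described above (illustrative numbers): for N independent tests and a desired family-wise error rate α, each voxel is thresholded at the per-voxel rate v = α / N.

```python
# Bonferroni-corrected per-voxel threshold.
# N and df are assumed, illustrative values.
from scipy import stats

alpha = 0.05          # desired FWE rate
N = 100_000           # number of voxels in the image
df = 30               # residual degrees of freedom (illustrative)

v = alpha / N                      # per-voxel significance level
u = stats.t.isf(v, df)             # corresponding t threshold
print(f"per-voxel p = {v:.1e}, t threshold = {u:.2f}")
```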
16
The Bonferroni correction Assume Independent Voxels
17
Independent voxels: a good assumption?
Sources of spatial correlation:
- Voxel Point Spread Function (PSF): the continuous signal is sampled over a discrete period, which imposes a filter that, when Fourier transformed, gives a PSF; this spreads signal through the image from a point source (worse in PET).
- Physiological noise
- Smoothing
- Normalisation
18
The Bonferroni correction: independent voxels vs. spatially correlated voxels.
Bonferroni is too conservative for brain images.
19
Random Field Theory
- Consider a statistic image as a discretisation of a continuous underlying random field.
- Use results from continuous random field theory.
(Figure: discretisation of a continuous field.)
20
Overview
1. Terminology
2. Random Field Theory
3. Cluster level inference
4. SPM Results
5. FDR
21
Euler Characteristic (EC)
A topological measure:
- threshold an image at u
- EC = number of blobs
- at high u: P(blob) ≈ E[EC]
So FWE, α ≈ E[EC].
22
Example – 2D Gaussian images
α = R (4 ln 2) (2π)^(-3/2) u exp(-u²/2)
where u is the voxel-wise threshold and R is the number of Resolution Elements (RESELs).
For N = 100×100 voxels with smoothness FWHM = 10 voxels, R = 10×10 = 100.
23
Example – 2D Gaussian images
α = R (4 ln 2) (2π)^(-3/2) u exp(-u²/2)
For R = 100 and α = 0.05, RFT gives u ≈ 3.8.
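A minimal sketch of this example: compute the resel count from the image size and FWHM, then solve the expected-EC equation above for the threshold u at the desired FWE rate.

```python
# Resel count and RFT threshold for the 2D Gaussian example on the slide.
import numpy as np
from scipy.optimize import brentq

# Resel count: image size (in voxels) divided by FWHM (in voxels), per dimension.
shape, fwhm = (100, 100), 10
R = np.prod([s / fwhm for s in shape])        # 10 x 10 = 100 resels

def expected_ec_2d(u, R):
    """Expected Euler characteristic of a thresholded 2D Gaussian field."""
    return R * 4 * np.log(2) * (2 * np.pi) ** (-1.5) * u * np.exp(-u**2 / 2)

alpha = 0.05
# E[EC] decreases monotonically in u over this range, so a root finder works.
u = brentq(lambda x: expected_ec_2d(x, R) - alpha, 2.0, 6.0)
print(f"R = {R:.0f} resels, u = {u:.2f}")     # u ~ 3.8, matching the slide
```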
24
How do we know the number of resels?
1. We can simply use the FWHM of the applied smoothing kernel, but processes such as normalisation mean that smoothness will vary across the image.
2. Estimate the FWHM at each voxel using the residuals at each voxel (Worsley, 1998).
25
Resel Counts for Brain Structures (FWHM = 20 mm)
(1) The threshold depends on the search volume.
(2) Surface area makes a large contribution.
(Table: resel counts broken down by volume, surface area, diameter, and Euler characteristic of the search region.)
26
Overview
1. Terminology
2. Theory
3. Imaging Data
4. Levels of Inference
5. SPM Results
27
Applied Smoothing
Smoothness: smoothness » voxel size; practically, FWHM ≥ 3 × voxel dimension.
Typical applied smoothing:
- Single subject: fMRI 6 mm, PET 12 mm
- Multi subject: fMRI 8-12 mm, PET 16 mm
28
Overview
1. Terminology
2. Theory
3. Imaging Data
4. Levels of Inference
5. SPM Results
29
Cluster Level Inference
- We can increase sensitivity by trading off anatomical specificity.
- Given a voxel-level threshold u, we can compute the likelihood (under the null hypothesis) of getting a cluster containing at least n voxels: CLUSTER-LEVEL INFERENCE.
- Similarly, we can compute the likelihood of getting c clusters each having at least n voxels: SET-LEVEL INFERENCE.
(A sketch of forming clusters from a thresholded image follows below.)
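A minimal sketch (not the RFT cluster p-value itself): thresholding a statistic image and measuring the size of each supra-threshold cluster, the quantity n that cluster-level inference is about. The image here is random noise, used purely for illustration.

```python
# Label supra-threshold clusters and report their sizes.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
stat_img = rng.standard_normal((64, 64))     # stand-in statistic image
u = 2.3                                      # voxel-level threshold

labels, n_clusters = ndimage.label(stat_img > u)              # connected components
sizes = ndimage.sum(stat_img > u, labels, range(1, n_clusters + 1))
print(f"{n_clusters} clusters; largest has {int(sizes.max())} voxels")
```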
30
Levels of inference (example SPM with clusters of size n = 82, 32, and 12):
- Set-level: P(c ≥ 3 | n ≥ 12, u ≥ 3.09) = 0.019, i.e. at least 3 clusters above threshold.
- Cluster-level: P(c ≥ 1 | n ≥ 82, t ≥ 3.09) = 0.029 (corrected), i.e. at least one cluster with at least 82 voxels above threshold.
- Voxel-level: P(c ≥ 1 | n > 0, t ≥ 4.37) = 0.048 (corrected), i.e. at least one cluster with an unspecified number of voxels above threshold.
31
OverviewOverview 1.Terminology 2.Theory 3.Imaging Data 4.Levels of Inference 5. SPM Results 1.Terminology 2.Theory 3.Imaging Data 4.Levels of Inference 5. SPM Results
32
SPM results I: activations significant at the cluster level but not at the voxel level.
34
SPM results II: activations significant at both the voxel and cluster level.
35
SPM results...
36
False Discovery Rate: example t-scores from regions that truly do and do not activate, thresholded at u1.

TRUTH \ ACTION    Don't Reject    Reject
H True (o)        TN = 7          FP = 3
H False (x)       FN = 0          TP = 10

At u1: FDR = FP/(# Reject) = 3/13 = 23%; α = FP/(# H True) = 3/10 = 30%.
(Figure: o and x t-scores along a line with the threshold u1 marked.)
37
False Discovery Rate: the same example t-scores, thresholded at u2.

TRUTH \ ACTION    Don't Reject    Reject
H True (o)        TN = 9          FP = 1
H False (x)       FN = 3          TP = 7

At u2: FDR = FP/(# Reject) = 1/8 = 13%; α = FP/(# H True) = 1/10 = 10%.
(Figure: o and x t-scores along a line with the threshold u2 marked.)
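The slides introduce the FDR concept without giving a procedure; a standard way to control it is the Benjamini-Hochberg step-up rule, sketched below with illustrative p-values (the specific procedure and numbers are my assumption, not the slides').

```python
# Benjamini-Hochberg: find the largest p-value threshold controlling FDR at level q.
import numpy as np

def bh_threshold(p_values, q=0.05):
    """Return the p-value cut-off given by the Benjamini-Hochberg procedure."""
    p = np.sort(np.asarray(p_values))
    m = len(p)
    # Largest k such that p_(k) <= (k/m) * q
    below = p <= (np.arange(1, m + 1) / m) * q
    if not below.any():
        return 0.0                       # nothing survives
    return p[np.nonzero(below)[0][-1]]

p_values = np.array([0.001, 0.008, 0.012, 0.041, 0.09, 0.20, 0.35, 0.55, 0.76, 0.91])
thr = bh_threshold(p_values, q=0.05)
print(f"Reject all voxels with p <= {thr}")   # 0.012 for these illustrative values
```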
38
False Discovery Rate
(Figure: thresholded statistic images for Signal + Noise and for Noise only.)
40
Summary
- We should not use uncorrected p-values.
- We can use Random Field Theory (RFT) to 'correct' p-values.
- RFT requires FWHM > 3 voxels.
- We only need to correct for the volume of interest.
- Cluster-level inference.
- False Discovery Rate is a viable alternative.
41
Functional Imaging Data
- The random fields are the component fields: Y = Xw + E, e = E/σ.
- We can only estimate the component fields, using estimates of w and σ.
- To apply RFT we need the RESEL count, which requires smoothness estimates.
42
Estimated component fields
(Figure: GLM diagram. The data matrix (scans × voxels) equals the design matrix times the parameters plus errors; estimation gives parameter estimates, residuals, and an estimated variance, from which the estimated component fields are formed. Each row is an estimated component field.)
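A minimal sketch of this estimation step with simulated data (the design, dimensions, and noise are illustrative): fit the GLM Y = Xw + E at every voxel by least squares, then form the estimated component fields e = E/σ from the residuals and the estimated standard deviation.

```python
# GLM estimation per voxel: parameter estimates, residuals, estimated variance,
# and estimated component fields. Data are simulated for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_scans, n_voxels = 120, 5000
X = np.column_stack([rng.standard_normal(n_scans), np.ones(n_scans)])   # design matrix
Y = X @ rng.standard_normal((2, n_voxels)) + rng.standard_normal((n_scans, n_voxels))

w_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)          # parameter estimates (per voxel)
residuals = Y - X @ w_hat                              # estimated errors
dof = n_scans - np.linalg.matrix_rank(X)
sigma_hat = np.sqrt((residuals ** 2).sum(axis=0) / dof)   # estimated std per voxel
component_fields = residuals / sigma_hat               # estimated component fields e = E/sigma
```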