Multiple Comparison Correction in SPMs
Will Penny, SPM short course, Zurich, Feb 2008


The SPM pipeline: image data undergo realignment & motion correction, smoothing (with a kernel) and normalisation (to an anatomical reference). A General Linear Model (design matrix) is then fitted, yielding parameter estimates and a statistic image. Random Field Theory converts the statistic image into corrected p-values: the Statistical Parametric Map.

Inference at a single voxel. Under the NULL hypothesis H (activation is zero), α = p(t > u | H) for the t-distribution. We can choose the threshold u (e.g. u = 2) to ensure a voxel-wise significance level of α. This is called an 'uncorrected' p-value, for reasons we'll see later. We can then plot a map of above-threshold voxels.

Inference for Images [figure: thresholded statistic maps for a Signal+Noise image and a Noise-only image]

Using an 'uncorrected' p-value of 0.1 will lead us to conclude, on average, that 10% of null voxels are active when they are not. This is clearly undesirable. To correct for this we can define a null hypothesis for images of statistics.
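As a sanity check (a minimal simulation, not from the slides), thresholding a pure-noise statistic image at an uncorrected α = 0.1 flags roughly 10% of voxels:

```python
import numpy as np

rng = np.random.default_rng(0)

# Null data: a 100x100 statistic image of pure Gaussian noise (no true activation).
stat_image = rng.standard_normal((100, 100))

# One-sided uncorrected threshold for alpha = 0.1 (the z quantile, hard-coded).
u = 1.2816  # norm.ppf(0.9)

frac_above = (stat_image > u).mean()
print(frac_above)  # close to 0.10: about 10% of null voxels survive the threshold
```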

Family-wise Null Hypothesis: activation is zero everywhere. If we reject the voxel null hypothesis at any voxel, we reject the family-wise null hypothesis. A false positive anywhere in the image gives a Family-Wise Error (FWE). The Family-Wise Error rate is the 'corrected' p-value.

[figure: thresholded null images using an 'uncorrected' p-value of α = 0.1 (many false positives, hence FWE) vs a 'corrected' p-value of α = 0.1]

The Bonferroni correction. The Family-Wise Error rate α for a family of N independent voxels is α = N v, where v is the voxel-wise error rate. Therefore, to ensure a particular FWE rate, set v = α / N. BUT...
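In code, the Bonferroni rule amounts to one division (a minimal sketch, using the 100x100 image size from the later RFT example):

```python
# Bonferroni sketch: to hold the family-wise error rate at alpha over
# N independent voxels, test each voxel at v = alpha / N.
alpha_fwe = 0.05
n_voxels = 100 * 100          # e.g. a 100x100 statistic image

v = alpha_fwe / n_voxels      # per-voxel ('uncorrected') significance level
print(v)                      # 5e-06

# First-order check: the approximate FWE rate N * v recovers alpha.
assert abs(n_voxels * v - alpha_fwe) < 1e-12
```

The tiny per-voxel level is the "BUT": for correlated voxels there are effectively far fewer than N independent tests, so this threshold is needlessly strict.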

The Bonferroni correction assumes independent voxels, but brain images contain spatially correlated voxels, so Bonferroni is too conservative for brain images.

Random Field Theory. Consider a statistic image as a discretisation of a continuous underlying random field, and use results from continuous random field theory.

Euler Characteristic (EC): a topological measure. Threshold an image at u; the EC is the number of blobs. At high u, the probability of any blob approaches the average EC, so the FWE rate α = E[EC].

Example – 2D Gaussian images: α = R (4 ln 2) (2π)^(-3/2) u exp(−u²/2), where u is the voxel-wise threshold and R is the number of Resolution Elements (RESELs). For N = 100×100 voxels with smoothness FWHM = 10, R = 10×10 = 100.

Example – 2D Gaussian images: with α = R (4 ln 2) (2π)^(-3/2) u exp(−u²/2), for R = 100 and α = 0.05, RFT gives u = 3.8.
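We can reproduce this number by inverting the EC formula numerically (a sketch; the bisection solver is our own choice, not part of the slides):

```python
import math

# Expected Euler characteristic of a thresholded 2D Gaussian field
# (the slide's formula): alpha = R * (4 ln 2) * (2*pi)^(-3/2) * u * exp(-u^2 / 2)
def ec_alpha(u, resels):
    return resels * 4 * math.log(2) * (2 * math.pi) ** -1.5 * u * math.exp(-u ** 2 / 2)

# Invert for the threshold u by bisection (ec_alpha decreases for u > 1).
def rft_threshold(alpha, resels):
    lo, hi = 1.0, 10.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if ec_alpha(mid, resels) > alpha:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

u = rft_threshold(0.05, 100)
print(round(u, 1))  # 3.8, matching the slide
```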

Estimated component fields. The data matrix (scans × voxels) equals the design matrix times the parameters, plus errors: Y = Xβ + ε. Fitting the model gives parameter estimates β̂ and an estimated variance σ̂² from the residuals; each row of the residuals is an estimated component field.

Applied smoothing. Smoothness must exceed the voxel size; practically, FWHM ≥ 3 × voxel dimension. Typical applied smoothing: single-subject fMRI 6mm, PET 12mm; multi-subject fMRI 8-12mm, PET 16mm.

SPM results I: activations significant at the cluster level but not at the voxel level.

SPM results II: activations significant at both the voxel and cluster level.


False Discovery Rate. E.g. t-scores from regions that truly do and do not activate (o = H true, x = H false), thresholded at u1:

TRUTH \ ACTION   Don't Reject   Reject
H True (o)       TN = 7         FP = 3
H False (x)      FN = 0         TP = 10

FDR = FP / (# Reject), α = FP / (# H True). At u1: FDR = 3/13 = 23%, α = 3/10 = 30%.

False Discovery Rate. The same t-scores thresholded at the stricter u2:

TRUTH \ ACTION   Don't Reject   Reject
H True (o)       TN = 9         FP = 1
H False (x)      FN = 3         TP = 7

At u2: FDR = 1/8 = 13%, α = 1/10 = 10%.
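The standard way to pick such a threshold so that the expected FDR stays below a level q is the Benjamini-Hochberg step-up procedure; a sketch (the p-values here are made up for illustration, and the slides do not name a specific procedure):

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of rejected hypotheses at FDR level q (BH step-up)."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    sorted_p = p[order]
    # Find the largest k with p_(k) <= (k / n) * q, then reject p_(1) .. p_(k).
    below = sorted_p <= (np.arange(1, n + 1) / n) * q
    reject = np.zeros(n, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[: k + 1]] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.5, 0.9]
reject = benjamini_hochberg(pvals, q=0.05)
print(reject.sum())  # number of tests declared significant
```

Note the adaptive character: 0.039 is not rejected on its own, because by rank 3 the BH boundary (3/10)·0.05 = 0.015 has already been crossed.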

False Discovery Rate [figure: FDR-thresholded maps for a Signal+Noise image and a Noise-only image]

Summary
- We should not use uncorrected p-values.
- We can use Random Field Theory (RFT) to 'correct' p-values.
- RFT requires FWHM > 3 voxels.
- We only need to correct for the volume of interest.
- Cluster-level inference.
- False Discovery Rate is a viable alternative.