
1 Data Preprocessing and Motion Correction The bulk of this demonstration will focus on ‘quality control’ measures. Standard processing procedure - every imager should implement some set of criteria with which they can effectively evaluate the quality of their data. fMRI data are inherently very ‘noisy’. Any time you can identify a source of ‘noise’ in your data set, you have a chance to remove or account for the variance in your signal caused by that source. The more ‘unwanted’ variance you can account for, the more likely it is that your effects of interest will reach statistical significance.

2 Sources of Noise in fMRI (HSM ch. 9, pp. 224-233):
Thermal Noise - fluctuations in MR signal intensity over space or time caused by thermal motion of electrons within the sample or the scanner hardware.
System Noise - fluctuations in MR signal intensity over space or time caused by imperfect functioning of the scanner hardware, e.g. scanner drift (slow changes in voxel intensity over time).
Motion and Physiological Noise - fluctuations in MR signal intensity over space or time due to physiological activity of the human body. Sources of physiological noise include motion, respiration, cardiac activity, and metabolic reactions.

3 How can we tell ‘good’ data from ‘bad’ data? Tool #1: ‘Voxel Surfing’ - take a moment to ‘snoop’ through your data voxel by voxel. In ‘bad’ data, look for huge, abrupt changes in signal intensity, either transient ‘spiking’ (e.g. 10769 - 9919 = 850 intensity units from one volume to the next) or sustained shifts (e.g. 10759 - 9731 = 1028 intensity units). A simple automated version of this check is sketched below.
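A minimal sketch of that check in Python, assuming the voxel time course is already loaded as a NumPy array; the 500-unit threshold and the toy time course are illustrative only, not recommended values.

```python
import numpy as np

def flag_spikes(timecourse, threshold=500):
    """Return indices of volumes whose intensity jumps more than
    `threshold` units relative to the preceding volume."""
    diffs = np.abs(np.diff(timecourse))
    return np.where(diffs > threshold)[0] + 1  # index the later volume

# Toy time course with a transient spike like the 850-unit jump above.
tc = np.full(120, 9919.0)
tc[61] = 10769.0
print(flag_spikes(tc))  # -> [61 62]: the jump up, then the jump back down
```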

4 How can we tell ‘good’ data from ‘bad’ data? Tool #2: Time Course Movies - look at the image intensity of your data as it changes over time (e.g. slice 1 at time points 60, 61, 62, and 63). Large movements of the head from one time point to the next should be evident as sudden shifts in the image intensity. Toggling between the first and last image can give you a rough estimate of head movement over the entire run (note the potential to detect gradual ‘drifts’); a crude version of that toggle is sketched below.
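A hedged sketch of the first-versus-last comparison: `run` stands in for a real 4-D data set, which would normally come from a loader such as nibabel (an assumption here, not part of the original demonstration).

```python
import numpy as np

# Stand-in for a real (x, y, z, time) functional run.
rng = np.random.default_rng(0)
run = rng.normal(10000, 50, size=(64, 64, 30, 120))

# A large mean absolute difference between the last and first volumes
# hints at net head displacement or gradual drift over the run.
drift = np.abs(run[..., -1] - run[..., 0]).mean()
print(f"mean |last - first| = {drift:.1f} intensity units")
```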

5 How can we tell ‘good’ data from ‘bad’ data? Tool #3: BV’s Motion Correction - look at the resultant plot of position changes over time as detected by BV: x, y, and z translation, plus x rotation (‘roll’), y rotation (‘pitch’), and z rotation (‘yaw’). A plotting sketch follows below.
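A sketch of how such a plot might be produced. The file name `motion_params.txt` and its column layout (three translations, three rotations, one row per volume) are assumptions; adapt them to whatever your package actually writes out.

```python
import numpy as np
import matplotlib.pyplot as plt

params = np.loadtxt("motion_params.txt")  # hypothetical: one row per volume
labels = ["x Trans", "y Trans", "z Trans",
          "x Rot (Roll)", "y Rot (Pitch)", "z Rot (Yaw)"]

for i, label in enumerate(labels):
    plt.plot(params[:, i], label=label)
plt.xlabel("Volume")
plt.ylabel("mm / degrees")  # assumed units
plt.legend()
plt.show()
```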

6 How can we tell ‘good’ data from ‘bad’ data? Tool #4: Converging Evidence - the most powerful method is to ‘search’ for overlapping conclusions. For example, volume 62 is consistently identified as a ‘problem volume’ by three pieces of evidence: the voxel time course (Tool #1), the time course movie across time points 60-63 (Tool #2), and the motion correction plot (Tool #3).

7 But, how bad are these data? Tool #5: Statistical Contrast Maps - the statistics themselves can often tell you how ‘bad’ the data are. Recall Fig. 10.6 (HSM), which illustrates the characteristic ‘ringing’ pattern of false activation, a tell-tale sign of head motion. Plenty of activation outside the brain is also not a good sign.

8 From Detection to Correction Motion Correction - the procedure most commonly used to combat head motion is simply to move the brain back to its original position in space (i.e. align each volume to a specified reference image). For example, with BVMC “…an intensity-based algorithm searches for six parameters which describe the subpixel translation and rotation of each measurement relative to the reference image…” Coregistration - the spatial alignment of two images or image volumes (HSM ch. 10, p. 263). A toy rigid-body example follows below.
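A minimal sketch of the rigid-body idea, not BVMC’s actual algorithm: resample a volume under a rotation plus translation using SciPy. Real packages estimate the six parameters by optimizing an intensity-based cost function; here they are simply supplied by hand.

```python
import numpy as np
from scipy.ndimage import affine_transform

def apply_rigid(volume, rotation, translation):
    """Resample `volume` under a rigid-body (rotation + translation)
    map, rotating about the volume centre."""
    center = (np.array(volume.shape) - 1) / 2.0
    offset = center - rotation @ center - translation
    return affine_transform(volume, rotation, offset=offset, order=1)

# Example: a 2-degree rotation about z plus a 1-voxel shift along x.
theta = np.deg2rad(2.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
vol = np.zeros((64, 64, 30))
vol[20:40, 20:40, 10:20] = 1.0  # a toy 'brain'
realigned = apply_rigid(vol, Rz, translation=np.array([1.0, 0.0, 0.0]))
```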

9 From Detection to Correction Things to consider: These procedures assume that the physical properties of the brain image (i.e. its size and shape) do not change as a function of its position in the magnetic field (see p. 263, rigid-body transformations), yet spatial distortions vary depending upon where in the magnetic field you are sampling from. These procedures also do not account for temporal changes in the signal caused by the motion: the impact of a head movement on the time course of activity within a given voxel/region is not corrected for by simply moving the brain ‘back to where it ought to be’.

10 Can it be Done? The Goal – correct for the unwanted variance in our data set associated with periods of head motion. Two ideas about how we might accomplish this task: 1. Remove the noise due to head motion – simply ‘cut’ the ‘problem’ time points out of the data set. 2. Model the noise due to head motion – account for the variance associated with periods of head motion by including regressor functions that accurately predict the changes in the signal caused by the motion.

11 Idea #1 - ‘Cut’ the Noise Things to consider: 1. We can slice up the data simply enough, but are there any issues with ‘piecing’ it back together? For example, let’s say that we decided that volumes 60-85 are significantly ‘contaminated’ with head motion. Cutting that period out of a hypothetical time course and splicing the remaining segments together can itself create a new problem: an abrupt discontinuity at the junction (see the sketch below).
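A sketch of that failure mode with made-up numbers: after the contaminated block is deleted, the two surviving segments meet at baselines that no longer match.

```python
import numpy as np

tc = np.concatenate([np.full(60, 10000.0),   # clean data
                     np.full(26, 10600.0),   # motion-contaminated period
                     np.full(34, 10150.0)])  # baseline settles elsewhere

cut = np.delete(tc, np.arange(60, 86))       # remove volumes 60-85
print(cut[58:62])  # an abrupt 150-unit step now sits at the splice point
```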

12 Idea #1 - ‘Cut’ the Noise Things to consider: 2. Loss of Statistical Power - the most obvious problem with this method is that by removing data points we are directly reducing our statistical power. This can be a serious concern when the effects of the experimental manipulation are expected to be very small. For example, studies employing fMR-adaptation paradigms usually report significant differences of less than 0.1%, so every data point is valuable!

13 Idea #2 - Model the Noise The success of the GLM depends entirely on how accurately the model ‘fits’ the actual data (see HSM ch. 12, pp. 337-343). Variance in the data that is not explained by the model is referred to as the residuals. The ultimate goal: use the motion to predict the signal, so that accounting for the variance due to motion reduces our residuals (see the sketch below).
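A minimal sketch of this idea with simulated numbers: adding six hypothetical motion parameters as nuisance regressors to a least-squares fit shrinks the residual variance relative to a task-only model.

```python
import numpy as np

n_vols = 120
rng = np.random.default_rng(1)
task = (np.arange(n_vols) % 20 < 10).astype(float)  # toy block design
motion = rng.normal(0, 1, size=(n_vols, 6))         # stand-in parameters
# Simulated voxel signal: task effect + motion-driven noise + random noise.
y = 2.0 * task + motion @ rng.normal(0, 1, 6) + rng.normal(0, 1, n_vols)

X_task = np.column_stack([task, np.ones(n_vols)])
X_full = np.column_stack([task, motion, np.ones(n_vols)])

for X, name in [(X_task, "task only"), (X_full, "task + motion")]:
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    print(f"{name}: residual variance = {resid.var():.2f}")
```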

14 Idea #2 - Model the Noise Independent Component Analysis (ICA) - a data-driven approach that explores the underlying statistical structure of a given data set. ICA can be used to reveal non-task-related components within your data set. These confounding components of no interest can then be incorporated into your hypothesis-driven model so as to reduce the amount of ‘noise’ in your data (a minimal decomposition sketch follows the readings below). Suggested Readings: McKeown et al. (2003). Independent component analysis of functional MRI: what is signal and what is noise? Current Opinion in Neurobiology, 13, 620-629. Thomas et al. (2002). Noise reduction in BOLD-based fMRI using component analysis. NeuroImage, 17, 1521-1537. McKeown (2000). Detection of consistently task-related activations in fMRI data with hybrid independent component analysis. NeuroImage, 11, 24-35.
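A bare-bones sketch in the spirit of those readings, using scikit-learn’s FastICA (one ICA variant among several) on a matrix of time points by voxels; deciding which components are ‘noise’ still takes human judgment or further modeling.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
data = rng.normal(size=(120, 5000))   # stand-in for (time x voxels) data

ica = FastICA(n_components=10, random_state=0)
timecourses = ica.fit_transform(data)  # (120, 10) component time courses
spatial_maps = ica.components_         # (10, 5000) component maps
# Components judged to be noise could be added to the GLM as regressors.
```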

15 Potential Problems with Both of These Ideas 1. They can be very time consuming. 2. Separating meaningful signal from nonsense noise is often a difficult process; there are many unexplained components of variance within any fMRI data set. When in doubt, throw it out! 3. There are definitely situations where confounding noise components actually overlap with task events. Modeling these components simply to increase the statistical significance of a given comparison of interest could easily be considered cheating! Thus, to use these methods appropriately, you must be certain that the noise components you wish to remove are not in any way associated with the cognitive processes you wish to investigate.

16 Spatial and Temporal Preprocessing HSM (ch. 10, pp. 274-280): “In neuroimaging, filters are used to remove uninteresting variation in the data that can be safely attributed to noise sources, while preserving signals of interest.”

17 Temporal Filtering Low-Pass Filter - allows low-frequency trends in the data to ‘pass’ (i.e. remain) while attenuating high-frequency trends. High-Pass Filter - allows high-frequency trends in the data to ‘pass’ (i.e. remain) while attenuating low-frequency trends. Beware: you can end up completely ‘squashing’ your effects of interest! High-frequency variations in fMRI data are likely to be meaningful. Thus, although temporal data smoothing can be useful, it should be approached with caution. Ask yourself: ‘Do I really know what I’m doing?’ Because if you don’t, you could be removing meaningful patterns of activation (see the sketch below).
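A sketch of temporal filtering with a Butterworth high-pass filter from SciPy. The TR of 2 s and the 0.01 Hz cutoff are illustrative assumptions; the point is that anything slower than the cutoff is attenuated, and a badly chosen cutoff would squash real effects just as readily.

```python
import numpy as np
from scipy.signal import butter, filtfilt

TR = 2.0                     # seconds per volume (assumed)
fs = 1.0 / TR                # sampling frequency in Hz
nyquist = fs / 2.0
b, a = butter(2, 0.01 / nyquist, btype="highpass")

# Toy time course: baseline + slow linear drift + a 0.05 Hz signal.
t = np.arange(120) * TR
tc = 10000 + 50 * t / t.max() + 20 * np.sin(2 * np.pi * 0.05 * t)
filtered = filtfilt(b, a, tc)  # drift removed; 0.05 Hz signal retained
```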

18 Spatial Filtering Spatial smoothing essentially ‘blurs’ your functional data. Why would you ever want to reduce the spatial resolution of your data? Spatial smoothing is often required when averaging data across several subjects, because of individual variations in brain anatomy and functional organization (see the sketch below).
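A sketch of spatial smoothing with a Gaussian kernel. Smoothing width is usually quoted as FWHM in millimetres; the 8 mm FWHM and 3 mm isotropic voxels below are hypothetical values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

fwhm_mm, voxel_mm = 8.0, 3.0
# Convert FWHM (in voxels) to the Gaussian sigma: FWHM = 2*sqrt(2*ln 2)*sigma.
sigma = (fwhm_mm / voxel_mm) / (2.0 * np.sqrt(2.0 * np.log(2.0)))

vol = np.random.default_rng(3).normal(size=(64, 64, 30))  # toy volume
smoothed = gaussian_filter(vol, sigma=sigma)  # the 'blurred' data
```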

