
Hidden Process Models with applications to fMRI data
Rebecca Hutchinson, Oregon State University
Joint work with Tom M. Mitchell, Carnegie Mellon University
August 2, 2009, Joint Statistical Meetings, Washington DC

Introduction
Hidden Process Models (HPMs):
– A probabilistic model for time series data.
– Designed for data generated by a collection of latent processes.
Example domain:
– Modeling cognitive processes (e.g. making a decision) in functional Magnetic Resonance Imaging time series.
Characteristics of potential domains:
– Processes with spatial-temporal signatures.
– Uncertainty about the temporal location of processes.
– High-dimensional, sparse, noisy data.

fMRI Data
[Figure: example voxel time course; neural activity followed by the hemodynamic response. Axes: signal amplitude vs. time (seconds).]
Features: 5k-15k voxels, imaged every second.
Training examples: trials (task repetitions).

Study: Pictures and Sentences
Task: decide whether the sentence correctly describes the picture; indicate with a button press.
13 normal subjects, 40 trials per subject.
Sentences and pictures describe 3 symbols: *, +, and $, using ‘above’, ‘below’, ‘not above’, ‘not below’.
Images are acquired every 0.5 seconds.
[Trial timeline figure: Read Sentence and View Picture stimuli, Press Button, then Fixation and Rest; marks at t=0, 4 sec., and 8 sec.]

Goals for fMRI
To track cognitive processes over time.
– Estimate hemodynamic response signatures.
– Estimate process timings.
– Modeling processes that do not directly correspond to the stimulus timing is a key contribution of HPMs!
To compare hypotheses of cognitive behavior.

Processes of the HPM:
Process 1: ReadSentence
– Response signature W: one time course per voxel (v1, v2)
– Duration d: 11 sec.
– Offsets Ω: {0, 1}
– P(Ω): {θ0, θ1}
Process 2: ViewPicture
– Response signature W: one time course per voxel (v1, v2)
– Duration d: 11 sec.
– Offsets Ω: {0, 1}
– P(Ω): {θ0, θ1}
Input stimulus Δ (sentence, picture) defines timing landmarks λ1, λ2.
Process instance: π2
– Process h: 2 (ViewPicture)
– Timing landmark: λ2
– Offset O: 1 (start time: λ2 + O)
One configuration c of process instances π1, π2, …, πk.
Predicted mean: the sum of the instantiated response signatures in each voxel, plus noise, + N(0, σ1) for v1 and + N(0, σ2) for v2.
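To make the predicted-mean construction concrete, here is a minimal sketch (not the authors' code; all names and the example values are illustrative) of how overlapping process instances superimpose:

import numpy as np

def predicted_mean(instances, T, V):
    """Sum the response signatures of all process instances.
    instances: list of (W, landmark, offset), W of shape (d, V)."""
    mu = np.zeros((T, V))
    for W, landmark, offset in instances:
        start = landmark + offset          # start time = lambda + O
        end = min(start + W.shape[0], T)
        mu[start:end] += W[:end - start]   # overlapping responses add linearly
    return mu

# Two 11 sec. processes whose responses overlap in time (hypothetical values):
# mu = predicted_mean([(W_read, 1, 1), (W_view, 17, 0)], T=40, V=2)
# The observed data is then Y = mu + per-voxel Gaussian noise N(0, sigma_v).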

HPM Formalism
HPM = ⟨H, C, σ⟩
H = ⟨h1, …, hH⟩, a set of processes (e.g. ReadSentence)
h = ⟨W, d, Ω, Θ⟩, a process
– W = response signature
– d = process duration
– Ω = allowable offsets
– Θ = multinomial parameters over values in Ω
C = ⟨c1, …, cC⟩, a set of possible configurations
c = ⟨π1, π2, …⟩, a set of process instances
π = ⟨h, λ, O⟩, a process instance (e.g. ReadSentence(S1))
– h = process ID
– λ = timing landmark (e.g. stimulus presentation of S1)
– O = offset (takes values in Ωh)
C = a latent variable indicating the correct configuration
σ = ⟨σ1, …, σV⟩, standard deviation for each voxel
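One possible way to encode this formalism as data structures (purely illustrative, under assumed names; the original work does not prescribe this form):

from dataclasses import dataclass
import numpy as np

@dataclass
class Process:              # h = <W, d, Omega, Theta>
    W: np.ndarray           # response signature, shape (d, V)
    d: int                  # duration (number of images)
    offsets: list           # Omega: allowable offsets
    theta: list             # Theta: multinomial over values in Omega

@dataclass
class ProcessInstance:      # pi = <h, lambda, O>
    h: int                  # process ID
    landmark: int           # lambda: timing landmark
    offset: int             # O: takes values in Omega_h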

HPMs: the graphical model
[Graphical model figure: for each process instance π1, …, πk, an observed timing landmark λ and an unobserved offset o combine to give the start time s; together with the process type h these generate the observed data Y(t,v), t = [1, T], v = [1, V]. The configuration c is unobserved.]
The set C of configurations constrains the joint distribution on {h(πk), o(πk)} ∀k.

Encoding Experiment Design
Processes: ReadSentence = 1, ViewPicture = 2, Decide = 3.
Input stimulus Δ defines timing landmarks λ1, λ2.
[Figure: Configurations 1-4, each placing ReadSentence and ViewPicture at the two landmarks and Decide at one of two offsets.]
Constraints encoded:
h(π1) = {1, 2}
h(π2) = {1, 2}
h(π1) != h(π2)
o(π1) = 0
o(π2) = 0
h(π3) = 3
o(π3) = {1, 2}
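A toy sketch of how these constraints prune the configuration space (hypothetical encoding; process IDs as on this slide):

from itertools import product

READ, VIEW, DECIDE = 1, 2, 3

# Each configuration assigns (process ID, offset) to the three instances,
# subject to the constraints listed above.
configurations = [
    ((h1, 0), (h2, 0), (DECIDE, o3))
    for h1, h2, o3 in product((READ, VIEW), (READ, VIEW), (1, 2))
    if h1 != h2                      # pi1 and pi2 are different processes
]
assert len(configurations) == 4      # the four configurations shown above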

Inference
Over C, the latent indicator of the correct configuration.
Choose the most likely configuration:

c* = argmax_c P(C = c | Y, Δ, HPM)

where Y = observed data, Δ = input stimuli, HPM = model.
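A hedged sketch of this inference step, reusing the predicted_mean helper sketched earlier (illustrative names; additive constants shared by all configurations are dropped from the Gaussian log-likelihood):

import numpy as np

def map_configuration(Y, candidates, sigma):
    """Y: (T, V) data; candidates: list of (instances, log_prior); sigma: (V,)."""
    scores = []
    for instances, log_prior in candidates:
        mu = predicted_mean(instances, *Y.shape)
        log_lik = -0.5 * np.sum(((Y - mu) / sigma) ** 2)  # Gaussian, up to a constant
        scores.append(log_lik + log_prior)                 # log P(Y|c) + log P(c|Delta)
    return candidates[int(np.argmax(scores))]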

Learning
Parameters to learn:
– Response signature W for each process
– Timing distribution Θ for each process
– Standard deviation σ for each voxel
Expectation-Maximization (EM) algorithm to estimate W and Θ.
– E step: estimate the probability distribution over C.
– M step: update estimates of W (using reweighted least squares), Θ, and σ (using standard MLEs) based on the E step.
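A rough sketch of the E step under the same assumed notation (the M step's reweighted least squares is omitted for brevity):

import numpy as np

def e_step(Y, candidates, sigma):
    """Posterior over configurations: P(C=c | Y) is proportional to P(c) * N(Y | mu_c, sigma)."""
    scores = np.array([
        log_prior - 0.5 * np.sum(((Y - predicted_mean(instances, *Y.shape)) / sigma) ** 2)
        for instances, log_prior in candidates
    ])
    scores -= scores.max()        # subtract the max for numerical stability
    post = np.exp(scores)
    return post / post.sum()      # responsibilities used by the M step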

Process Response Signatures
Standard: Each process has a matrix of parameters, one for each point in space and time over the duration of the response (e.g. 24 time points per voxel).
Regularized: Same as standard, but learned with penalties for deviations from temporal and/or spatial smoothness.
Basis functions: Each process has a small number of weights per voxel (e.g. 3) that are combined with a temporal basis to get the response.
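A small sketch of the basis-function variant (the sine basis here is an illustrative stand-in for the Hossein-Zadeh03 set shown on a later slide):

import numpy as np

d, n_basis, V = 24, 3, 1000    # 12 sec. at 0.5 sec. sampling; 3 basis curves
t = np.arange(d)
# illustrative smooth temporal basis, shape (d, n_basis)
B = np.stack([np.sin((k + 1) * np.pi * t / (d - 1)) for k in range(n_basis)], axis=1)
weights = np.zeros((n_basis, V))  # learned: 3 numbers per voxel instead of 24
W = B @ weights                   # (d, V) response signature, smooth by construction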

Models
HPM-GNB: ReadSentence and ViewPicture, duration = 8 sec. (no overlap)
– an approximation of a Gaussian Naïve Bayes classifier, with HPM assumptions and noise model
HPM-2: ReadSentence and ViewPicture, duration = 12 sec. (temporal overlap)
HPM-3: HPM-2 + Decide (offsets = [0, 7] images following the second stimulus)
HPM-4: HPM-3 + PressButton (offsets = {-1, 0} following the button press)

Evaluation
Select the 1000 most active voxels.
Compute the improvement in test-data log-likelihood compared with predicting the mean training trial for all test trials (a baseline).
5-fold cross-validation per subject; mean over 13 subjects.
[Table: improvement for HPM-GNB, HPM-2, HPM-3, and HPM-4 under the Standard, Regularized, and Basis-function signatures; numeric values not preserved in the transcript.]
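A sketch of the reported metric under assumed helper names: held-out log-likelihood gain over the mean-training-trial baseline:

import numpy as np

def gaussian_loglik(Y, mu, sigma):
    # log N(Y | mu, diag(sigma^2)) summed over time points and voxels
    return float(np.sum(-0.5 * np.log(2 * np.pi * sigma ** 2)
                        - 0.5 * ((Y - mu) / sigma) ** 2))

def improvement(test_trials, model_means, baseline_mean, sigma):
    """Sum over test trials of log L(model prediction) - log L(mean training trial)."""
    return sum(gaussian_loglik(Y, mu, sigma) - gaussian_loglik(Y, baseline_mean, sigma)
               for Y, mu in zip(test_trials, model_means))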

Interpretation and Visualization
Timing for the third (Decide) process in HPM-3 (values have been rounded):
[Table: P(offset) at each allowable offset for the Standard, Regularized, and Basis-function models; numeric values not preserved in the transcript.]
For each subject, average the response signatures for each voxel over time, and plot the result at each spatial location.
Compare time courses for the same voxel.

[Figure: learned response signatures, Standard model.]

[Figure: learned response signatures, Regularized model.]

[Figure: learned response signatures, Basis functions model.]

Time courses
[Figure: time courses for the same voxel under the Standard, Regularized, and Basis-function models, alongside the basis set (Hossein-Zadeh03).]

Related Work
fMRI:
– General Linear Model (Dale99): must assume the timing of process onset to estimate the hemodynamic response.
– Computer models of human cognition (Just99, Anderson04): predict fMRI data rather than learning the parameters of processes from the data.
Machine Learning:
– Classification of windows of fMRI data (overview in Haynes06): does not typically model overlapping hemodynamic responses.
– Dynamic Bayes Networks (Murphy02, Ghahramani97): HPM assumptions/constraints can be encoded by extending factorial HMMs with links between the Markov chains.

Conclusions
Take-away messages:
– HPMs are a probabilistic model for time series data generated by a collection of latent processes.
– In the fMRI domain, HPMs can simultaneously estimate the hemodynamic response and localize the timing of cognitive processes.
Future work:
– Automatically discover the number of latent processes.
– Learn process durations.
– Apply to open cognitive science problems.

References
John R. Anderson, Daniel Bothell, Michael D. Byrne, Scott Douglass, Christian Lebiere, and Yulin Qin. An integrated theory of the mind. Psychological Review, 111(4):1036–1060, 2004.
Anders M. Dale. Optimal experimental design for event-related fMRI. Human Brain Mapping, 8:109–114, 1999.
Zoubin Ghahramani and Michael I. Jordan. Factorial hidden Markov models. Machine Learning, 29:245–275, 1997.
John-Dylan Haynes and Geraint Rees. Decoding mental states from brain activity in humans. Nature Reviews Neuroscience, 7:523–534, July 2006.
Gholam-Ali Hossein-Zadeh, Babak A. Ardekani, and Hamid Soltanian-Zadeh. A signal subspace approach for modeling the hemodynamic response function in fMRI. Magnetic Resonance Imaging, 21:835–843, 2003.
Marcel Adam Just, Patricia A. Carpenter, and Sashank Varma. Computational modeling of high-level cognition and brain function. Human Brain Mapping, 8:128–136, 1999. modeling4CAPS.htm.
Kevin P. Murphy. Dynamic Bayesian networks. To appear in Probabilistic Graphical Models, M. Jordan, November 2002.