1 Preliminary Experiments: Learning Virtual Sensors Machine learning approach: train classifiers –fMRI(t, t+δ) → CognitiveState Fixed set of possible states Trained per subject, per experiment Time interval specified

2 Approach Learn fMRI(t,…,t+k) → CognitiveState Classifiers: –Gaussian Naïve Bayes, SVM, kNN Feature selection/abstraction –Select subset of voxels (by signal, by anatomy) –Select subinterval of time –Average activities over space, time –Normalize voxel activities –…
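The first classifier named above, Gaussian Naïve Bayes, is simple enough to sketch directly. The following is a minimal illustration on synthetic two-class "voxel" vectors, not the authors' actual code; all names and data are invented for the example.

```python
import math

def fit_gnb(examples, labels):
    """Estimate per-class, per-voxel Gaussian mean and variance."""
    model = {}
    for c in set(labels):
        rows = [x for x, y in zip(examples, labels) if y == c]
        n, d = len(rows), len(rows[0])
        means = [sum(r[j] for r in rows) / n for j in range(d)]
        vars_ = [sum((r[j] - means[j]) ** 2 for r in rows) / n + 1e-6
                 for j in range(d)]
        model[c] = (means, vars_, n / len(labels))
    return model

def predict_gnb(model, x):
    """Pick the class maximizing the naive-Bayes log posterior."""
    def log_post(c):
        means, vars_, prior = model[c]
        ll = math.log(prior)
        for xj, m, v in zip(x, means, vars_):
            ll += -0.5 * math.log(2 * math.pi * v) - (xj - m) ** 2 / (2 * v)
        return ll
    return max(model, key=log_post)

# Toy data: "picture" voxels centered near 1.0, "sentence" near -1.0.
X = [[1.1, 0.9], [0.8, 1.2], [-1.0, -0.9], [-1.2, -1.1]]
y = ["picture", "picture", "sentence", "sentence"]
gnb = fit_gnb(X, y)
print(predict_gnb(gnb, [1.0, 1.0]))   # expected: picture
```

The conditional-independence assumption (one Gaussian per voxel per class) is what makes training tractable when there are thousands of voxels but few training examples.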

3 Study 1: Word Categories Family members Occupations Tools Kitchen items Dwellings Building parts 4 legged animals Fish Trees Flowers Fruits Vegetables [Francisco Pereira]

4 Word Categories Study Ten neurologically normal subjects Stimulus: –12 blocks of words: Category name (2 sec) Word (400 msec), Blank screen (1200 msec); answer … –Subject answers whether each word in category –32 words per block, nearly all in category –Category blocks interspersed with 5 fixation blocks

5 Training Classifier for Word Categories Learn fMRI(t) → word-category(t) –fMRI(t) = 8470 to 11,136 voxels, depending on subject Feature selection: Select n voxels –Best single-voxel classifiers –Strongest contrast between fixation and some word category –Strongest contrast, spread equally over ROIs –Randomly Training method: –Train ten single-subject classifiers –Gaussian Naïve Bayes → P(fMRI(t) | word-category)
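One of the selection heuristics above, choosing voxels with the strongest contrast between fixation and a task condition, could be sketched as follows. This is a simplified illustration using a t-like score; the scoring formula and all names are assumptions, not the study's exact procedure.

```python
def contrast_scores(task_vals, fixation_vals):
    """Score each voxel by |mean difference| / pooled std between
    task-block and fixation-block activity (a simple contrast heuristic)."""
    scores = []
    for t_col, f_col in zip(zip(*task_vals), zip(*fixation_vals)):
        mt = sum(t_col) / len(t_col)
        mf = sum(f_col) / len(f_col)
        var = (sum((v - mt) ** 2 for v in t_col) +
               sum((v - mf) ** 2 for v in f_col)) / (len(t_col) + len(f_col))
        scores.append(abs(mt - mf) / (var ** 0.5 + 1e-9))
    return scores

def select_top_voxels(scores, n):
    """Indices of the n highest-contrast voxels."""
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:n]

# Voxel 1 responds strongly to the task; voxels 0 and 2 do not.
task = [[0.1, 2.0, 0.0], [0.0, 2.2, 0.1]]
fixation = [[0.0, 0.1, 0.0], [0.1, 0.0, 0.1]]
scores = contrast_scores(task, fixation)
print(select_top_voxels(scores, 1))   # expected: [1]
```

The "spread equally over ROIs" variant would simply apply the same top-n selection separately within each anatomical region.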

6 Learned Bayes Models - Means P(BrainActivity | WordCategory = People)

7 Learned Bayes Models - Means P(BrainActivity | WordClass) Animal words People words Accuracy: 85%

8 Results Classifier outputs ranked list of classes Evaluate by the fraction of classes ranked ahead of true class –0=perfect, 0.5=random, 1.0=unbelievably poor Try abstracting 12 categories to 6 categories e.g., combine “Family Members” with “Occupations”
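The rank-based metric described above is straightforward to compute: for each test example, count how many of the other classes the classifier scored above the true class. A small sketch with made-up scores:

```python
def normalized_rank(class_scores, true_class):
    """Fraction of the other classes scored strictly higher than the
    true class: 0 = true class ranked first, ~0.5 = chance, 1 = last."""
    others = [c for c in class_scores if c != true_class]
    ahead = sum(1 for c in others
                if class_scores[c] > class_scores[true_class])
    return ahead / len(others)

scores = {"Tools": 0.9, "Fish": 0.4, "Fruits": 0.7, "Trees": 0.2}
print(normalized_rank(scores, "Tools"))   # expected: 0.0 (ranked first)
print(normalized_rank(scores, "Fish"))    # expected: 2/3 (two classes ahead)
```

Averaging this quantity over test examples gives the 0-to-1 score quoted on the slide.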

9 Impact of Feature Selection

10 “Zero Signal” learning setting Goal: learn f: X → Y or P(Y|X) Given: 1. Training examples where X_i = S_i + N_i, signal S_i ~ P(S|Y=i), noise N_i ~ P_noise 2. Observed noise with zero signal, N_0 ~ P_noise (fixation) [Figure: Class 1 observations X_1 = S_1 + N_1, Class 2 observations X_2 = S_2 + N_2, pure-noise observations N_0]
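One way the extra zero-signal data N_0 could be exploited, sketched here purely as an illustration of the setting and not as the authors' method, is to estimate the per-voxel noise variance from fixation scans and use it to weight a nearest-mean classifier:

```python
def noise_variance(fixation_samples):
    """Per-voxel noise variance estimated from zero-signal (fixation) data."""
    out = []
    for col in zip(*fixation_samples):
        m = sum(col) / len(col)
        out.append(sum((v - m) ** 2 for v in col) / len(col) + 1e-6)
    return out

def nearest_mean_with_noise(x, class_means, var):
    """Classify x by distance to each class mean, weighting each voxel
    by the inverse of its estimated noise variance."""
    def dist(c):
        return sum((xi - mi) ** 2 / vi
                   for xi, mi, vi in zip(x, class_means[c], var))
    return min(class_means, key=dist)

fixation = [[0.1, -0.1], [-0.1, 0.1], [0.0, 0.0], [0.05, -0.05]]
var = noise_variance(fixation)
means = {1: [1.0, 0.0], 2: [0.0, 1.0]}
print(nearest_mean_with_noise([0.9, 0.1], means, var))   # expected: 1
```

The point of the setting is that the noise model can be estimated from plentiful fixation data even when labeled task examples are scarce.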

11 [image-only slide]

12 [Haxby et al., 2001]

13 Study 1: Summary Able to classify single fMRI image by word category block Feature selection important Is classifier learning word category or something else related to time? –Accurate across ten subjects –Relevant voxels in similar locations across subjects –Locations compatible with earlier studies –New experimental data will answer definitively

14 Study 2: Pictures and Sentences Trial: read sentence, view picture, answer whether sentence describes picture Picture presented first in half of trials, sentence first in other half Image every 500 msec 12 normal subjects Three possible objects: star, dollar, plus Collected by Just et al. [Xuerui Wang and Stefan Niculescu]

15 It is true that the star is above the plus?

16 [image-only slide]

17 [image-only slide: “*” stimulus]

18 [image-only slide]

19 Is Subject Viewing Picture or Sentence? Learn fMRI(t, …, t+15) → {Picture, Sentence} –40 training trials (40 pictures and 40 sentences) –7 ROIs Training methods: –K Nearest Neighbor –Support Vector Machine –Naïve Bayes

20 Is Subject Viewing Picture or Sentence? Support Vector Machine worked better on average Results (leave one out) on picture-then-sentence, sentence-then-picture data –Random guess = 50% accuracy –SVM using pair of time slices at 5.0 and 5.5 sec after stimulus: 91% accuracy
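The leave-one-out protocol behind these accuracy figures can be sketched generically. Here a simple nearest-mean classifier stands in for the SVM, and the data are invented; only the evaluation loop itself reflects the slide.

```python
def nearest_mean_predict(train_X, train_y, x):
    """Classify x by squared distance to each class's training mean."""
    means = {}
    for c in set(train_y):
        rows = [r for r, lab in zip(train_X, train_y) if lab == c]
        means[c] = [sum(col) / len(rows) for col in zip(*rows)]
    return min(means, key=lambda c: sum((a - b) ** 2
                                        for a, b in zip(x, means[c])))

def leave_one_out_accuracy(X, y):
    """Hold out each trial in turn, train on the rest, average the hits."""
    hits = 0
    for i in range(len(X)):
        tX = X[:i] + X[i + 1:]
        ty = y[:i] + y[i + 1:]
        hits += nearest_mean_predict(tX, ty, X[i]) == y[i]
    return hits / len(X)

X = [[1.0, 0.1], [0.9, 0.0], [1.1, 0.2],
     [0.0, 1.0], [0.1, 0.9], [0.2, 1.1]]
y = ["picture", "picture", "picture",
     "sentence", "sentence", "sentence"]
print(leave_one_out_accuracy(X, y))   # expected: 1.0
```

Leave-one-out makes the most of the small number of trials per subject, at the cost of retraining the classifier once per trial.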

21 Accuracy for Single-Subject Classifiers

22 Can We Train Subject-Independent Classifiers?

23 Training Cross-Subject Classifiers Approach: define supervoxels based on anatomically defined regions of interest –Abstract to seven brain region supervoxels Train on n-1 subjects, test on nth
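The supervoxel abstraction above, averaging voxel activity within each anatomically defined ROI so that all subjects share one fixed-length representation, might look like this. The ROI assignments and data are invented for illustration.

```python
def to_supervoxels(voxels, roi_of_voxel, n_rois):
    """Average voxel activities within each anatomically defined ROI,
    reducing a subject-specific voxel vector to n_rois supervoxels."""
    sums = [0.0] * n_rois
    counts = [0] * n_rois
    for v, roi in zip(voxels, roi_of_voxel):
        sums[roi] += v
        counts[roi] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]

# Two ROIs; the subjects have different voxel counts, but their
# supervoxel representations have the same, comparable length.
roi_map_subj1 = [0, 0, 1, 1, 1]
roi_map_subj2 = [0, 1, 1]
s1 = to_supervoxels([1.0, 3.0, 2.0, 2.0, 2.0], roi_map_subj1, 2)
s2 = to_supervoxels([4.0, 1.0, 3.0], roi_map_subj2, 2)
print(s1)   # expected: [2.0, 2.0]
print(s2)   # expected: [4.0, 2.0]
```

Once every subject is reduced to the same seven supervoxels, "train on n-1 subjects, test on the nth" is an ordinary cross-validation loop over subjects rather than over trials.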

24 Accuracy for Cross-Subject GNB Classifier

25 Accuracy for Cross-Subject and Cross-Context Classifier

26 Possible ANN to discover intermediate data abstraction across multiple subjects. Each bank of inputs corresponds to voxel inputs for a particular subject. The trained hidden layer will provide a low-dimensional data abstraction shared by all subjects. We propose to develop new algorithms to train such networks to discover multi-subject classifiers. [Figure labels: Subject 1, Subject 2, Subject 3, Learned cross-subject representation, Output classification]
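The architecture proposed above, per-subject input banks feeding one shared hidden layer, can be sketched as a forward pass. This illustrates only the structure, with made-up weights and dimensions; it is not the proposed training algorithm.

```python
import math

def matvec(W, x):
    """Multiply weight matrix W (list of rows) by vector x."""
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def forward(subject_id, voxels, subject_weights, shared_weights):
    """Map one subject's voxels through that subject's private input
    layer into the shared hidden layer, then to the output layer."""
    hidden = [math.tanh(h)
              for h in matvec(subject_weights[subject_id], voxels)]
    return matvec(shared_weights, hidden)

# Subject 1 has 3 voxels, subject 2 has 4; both map into a 2-unit
# shared hidden layer feeding a single output unit.
subject_weights = {
    1: [[0.5, -0.2, 0.1], [0.3, 0.3, -0.4]],
    2: [[0.2, 0.2, -0.1, 0.4], [-0.3, 0.1, 0.5, 0.0]],
}
shared_weights = [[1.0, -1.0]]
out1 = forward(1, [0.4, 1.0, -0.2], subject_weights, shared_weights)
out2 = forward(2, [0.1, 0.0, 0.3, 0.2], subject_weights, shared_weights)
print(len(out1), len(out2))   # expected: 1 1
```

Because only the input banks differ per subject, the hidden layer is forced to learn a representation that every subject's data can be mapped into, which is exactly the cross-subject abstraction the slide proposes.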