
1 Distributed Representative Reading Group

2 Research Highlights
1. Support vector machines can robustly decode semantic information from EEG and MEG
2. Multivariate decoding techniques allow for detection of subtle, but distributed, effects
3. Semantic categories and individual words have distributed spatiotemporal representations
4. Representations are consistent between subjects and stimulus modalities
5. A scalable hierarchical tree decoder further improves decoding performance

3 Why do reported results vary from study to study? Partly because of the statistical analysis. Traditional univariate techniques applied to high-dimensional neuroimaging data:
- require correction for multiple comparisons to control for false positives (illustrated in the sketch below)
- are insensitive to subtle, but widespread, effects within the brain
- yield differing results depending on the specific responses elicited by the particular experiment performed
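
As a rough illustration of the multiple-comparisons point (not from the slides; the channel count and effect size are invented), a mass-univariate analysis must split its alpha level across every channel it tests, which can wipe out a subtle but widespread effect:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_trials, n_channels = 100, 306                        # assumed MEG-like dimensions
cond_a = rng.normal(0.0, 1.0, (n_trials, n_channels))
cond_b = rng.normal(0.1, 1.0, (n_trials, n_channels))  # weak effect on every channel

# One t-test per channel: the correction burden grows with dimensionality.
_, p = ttest_ind(cond_a, cond_b, axis=0)

alpha = 0.05
print("uncorrected detections:", np.sum(p < alpha))
print("Bonferroni detections: ", np.sum(p < alpha / n_channels))  # often zero
```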

4 Why choose SVM? SVMs:
- are robust to high-dimensional data
- attempt to find a separating boundary that maximizes the margin between the classes, which reduces over-fitting and allows for good generalization when classifying novel data
- allow for a multivariate examination of spatiotemporal dynamics
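
A minimal sketch of such a decoder using scikit-learn (the library choice and all dimensions are assumptions, not from the slides):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_trials, n_features = 200, 306 * 6        # assumed: 306 channels x 6 time windows
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 2, n_trials)           # e.g. living (1) vs. nonliving (0)

# Linear SVM on standardized features; margin maximization limits over-fitting
# even when features vastly outnumber trials.
clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, dual=False))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```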

5 Why hierarchical tree decoding? Although a single multiclass decoder can distinguish individual word representations well, it does not directly incorporate prior knowledge about semantic classes and the features which best discriminate those categories. To combine information from the classifier models generated to decode semantic category and individual words, a hierarchical tree framework which attempts to decode word properties sequentially was implemented. Given an unknown word, the tree decoder:
1. first classifies it as either a large (target) or small (nontarget) object
2. then classifies it as a living or nonliving object
3. finally classifies it as an individual word within the predicted semantic category
Advantages (see the sketch after this list): the appropriate features can be used to decode each word property, narrowing the search space before individual words are decoded; and such a tree construct is easily scalable, which could allow for the eventual decoding of larger libraries of words.
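
A minimal sketch of the sequential scheme (the class layout and scikit-learn are assumptions; for brevity all three levels share one feature matrix, whereas the paper uses level-specific features, see slide 12):

```python
import numpy as np
from sklearn.svm import LinearSVC

class TreeDecoder:
    """Decode word properties sequentially: size -> category -> word within category.

    size, category, word: 1-D numpy arrays of per-trial labels.
    """

    def fit(self, X, size, category, word):
        self.size_clf = LinearSVC(dual=False).fit(X, size)      # large vs. small
        self.cat_clf = LinearSVC(dual=False).fit(X, category)   # living vs. nonliving
        # One word decoder per semantic category, trained only on that category's trials.
        self.word_clf = {c: LinearSVC(dual=False).fit(X[category == c], word[category == c])
                         for c in np.unique(category)}
        return self

    def predict_one(self, x):
        x = np.atleast_2d(x)
        s = self.size_clf.predict(x)[0]      # level 1: large/small
        c = self.cat_clf.predict(x)[0]       # level 2: living/nonliving
        w = self.word_clf[c].predict(x)[0]   # level 3: word within predicted category
        return s, c, w
```

Because level 3 only searches within the predicted category, the word decoder faces a smaller search space, which is the scalability argument made on the slide.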

6 Experiment: visual (SV) and auditory (SA) versions of a language task.
Task: subjects were instructed to press a button if the presented word represented an object larger than 1 foot in any dimension.
Stimuli: objects larger than 1 foot and smaller than 1 foot were balanced 1:1, as were living objects (animals and animal parts) and nonliving objects (man-made items).
Presentation: half of the trials presented a novel word shown only once during the experiment, while the other half presented 1 of 10 repeated words (each shown multiple times during the experiment).

7 Decoding framework
Features: the average amplitude in six 50-ms time windows was sampled from every channel and concatenated into a large feature vector for each trial.
- Decoding living versus nonliving: windows at 200, 300, 400, 500, 600, and 700 ms poststimulus
- Decoding individual words: windows at 250, 300, 350, 400, 450, and 500 ms poststimulus
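
A minimal sketch of this feature extraction (the sampling rate is invented, and the slides do not say whether windows start at or are centered on the listed times; here they start there):

```python
import numpy as np

def extract_features(epochs, sfreq, window_starts_ms, win_ms=50):
    """Average amplitude per channel in each window, concatenated per trial.

    epochs: (n_trials, n_channels, n_samples), time-locked to stimulus onset.
    """
    feats = []
    for start in window_starts_ms:
        a = int(start / 1000 * sfreq)
        b = int((start + win_ms) / 1000 * sfreq)
        feats.append(epochs[:, :, a:b].mean(axis=2))  # (n_trials, n_channels)
    return np.concatenate(feats, axis=1)              # (n_trials, n_channels * n_windows)

# e.g. living/nonliving features, assuming 306 channels sampled at 600 Hz
epochs = np.random.randn(100, 306, 512)
X = extract_features(epochs, 600, [200, 300, 400, 500, 600, 700])
```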

8 Decoding accuracy: compared to a naive Bayes classifier, the SVM is better able to handle high-dimensional data.
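
A rough way to run this kind of head-to-head comparison (synthetic data, so the numbers will not match the slides):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Synthetic stand-in for the trial-by-feature matrix (306 channels x 6 windows assumed);
# a weak signal spread over many features is the regime where the slides report
# the SVM outperforming naive Bayes.
X, y = make_classification(n_samples=200, n_features=306 * 6, n_informative=50,
                           random_state=0)

for name, clf in [("naive Bayes", GaussianNB()),
                  ("linear SVM ", make_pipeline(StandardScaler(), LinearSVC(dual=False)))]:
    print(name, cross_val_score(clf, X, y, cv=5).mean().round(2))
```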

9 SVM weights show the important times and locations for decoding.
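
With a linear SVM, the learned weight vector can be folded back into a windows-by-channels map to read off those times and locations (a sketch; the feature ordering is assumed to match the extraction above):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

n_channels, n_windows = 306, 6                   # assumed dimensions, as above
X, y = make_classification(n_samples=200, n_features=n_channels * n_windows,
                           n_informative=50, random_state=0)
w = LinearSVC(dual=False).fit(X, y).coef_.reshape(n_windows, n_channels)

# Large-magnitude weights mark the time windows and sensors driving the decision.
win, chan = np.unravel_index(np.argmax(np.abs(w)), w.shape)
print(f"most informative: window {win}, channel {chan}")
```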

10 Decoding is not based on low-level stimulus properties. It is possible that the generated classifiers utilize neural activity related to low-level visual or auditory stimulus properties when decoding individual words. To test this, the authors performed a shuffling of trials based on stimulus properties to evaluate this potential confound.
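
One common form of such a control, sketched here under the assumption that the low-level property is word length (the slides do not spell out the exact shuffling procedure): relabel trials by the low-level property and check that it is not decodable above chance.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 306 * 6))
word_len = rng.integers(3, 9, 200)       # hypothetical low-level property per trial

# If the word decoder were driven by low-level features, a classifier given
# low-level-property labels should also succeed; chance-level accuracy here
# supports the claim that decoding reflects semantics.
labels = (word_len > word_len.mean()).astype(int)
print("low-level decoding:", cross_val_score(LinearSVC(dual=False), X, labels, cv=5).mean())
```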

11 Inter-modality and inter-subject decoding show shared neural representations.
- Inter-modality: train the classifier on one modality and test it on the other modality.
- Inter-subject: the classifier was trained on data from all but one subject within a single modality, and the remaining subject was used as test data (sketched below).
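
A minimal sketch of the inter-subject (leave-one-subject-out) scheme; the subject count and array shapes are invented:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
# Assumed: per-subject feature matrices and labels within a single modality.
subjects = [(rng.normal(size=(80, 306 * 6)), rng.integers(0, 2, 80)) for _ in range(9)]

for held_out in range(len(subjects)):
    X_tr = np.vstack([X for i, (X, y) in enumerate(subjects) if i != held_out])
    y_tr = np.concatenate([y for i, (X, y) in enumerate(subjects) if i != held_out])
    X_te, y_te = subjects[held_out]
    acc = LinearSVC(dual=False).fit(X_tr, y_tr).score(X_te, y_te)
    print(f"held-out subject {held_out}: accuracy {acc:.2f}")
```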

12 Hierarchical tree decoding improves decoding performance. A three-level hierarchical tree decoder (the structure sketched after slide 5) first decodes the large/small distinction (using amplitude and spectral features), then the living/nonliving object category (using 200–700 ms amplitude features), and finally the individual word (using 250–500 ms amplitude features).

13 Thanks for your attention!

