Gaurav Aggarwal, Mark Shaw, Christian Wolf


Image Reconstruction from Human Brain Activity
Gaurav Aggarwal, Mark Shaw, Christian Wolf

Retinotopy: fovea, visual field, visual cortex. Neurons that are close together in visual cortex respond to nearby locations in the visual field. This creates a retinotopic map on the cortex, with locations in the center of the visual field having an inflated representation, like a blown-up balloon. 07.12.2018 CoSMo 2013

Classification In 2008, Miyawaki and colleagues made headlines, and the front page of Neuron, when they used the retinotopy of visual cortex and activity measured in an fMRI scanner to reconstruct the images that people saw from their brain activity. Yoichi Miyawaki et al. (2008)

Procedure (Miyawaki et al., 2008): train on random images (fMRI), 10 x 10 black-and-white pixels; reconstruct target images from V1, V2, V3 activity. Their experiment consisted of two phases. In the first, random arrays of black and white pixels on a 10 x 10 grid were shown, and the resulting brain activity was used to train classifiers. Then actual figures were shown on the same grid, and brain activity from these trials was used only to test the classifiers' accuracy. Here you can see the reconstruction, letter by letter, of the word "neuron".
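The two-phase procedure can be sketched with synthetic data: fit one linear decoder per pixel on the random-image trials, then use held-out trials only for testing. All sizes, the noise level, and the plain least-squares decoder are illustrative stand-ins, not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only: 200 training trials, 30 voxels, 4 pixels
# (the actual study used a 10x10 grid and V1/V2/V3 voxels).
n_trials, n_voxels, n_pixels = 200, 30, 4
pixels = rng.integers(0, 2, size=(n_trials, n_pixels))       # black/white stimuli
mixing = rng.normal(size=(n_pixels, n_voxels))               # simulated voxel tuning
voxels = pixels @ mixing + 0.1 * rng.normal(size=(n_trials, n_voxels))

# Phase 1: fit one linear decoder per pixel on the random-image trials.
W, *_ = np.linalg.lstsq(voxels, pixels, rcond=None)

# Phase 2: reconstruct held-out "target" images; threshold at 0.5.
test_pixels = rng.integers(0, 2, size=(50, n_pixels))
test_voxels = test_pixels @ mixing + 0.1 * rng.normal(size=(50, n_voxels))
recon = (test_voxels @ W > 0.5).astype(int)
accuracy = (recon == test_pixels).mean()
```

With noise this low the decoder recovers nearly every pixel; real fMRI data are far noisier, which is why the accuracies reported later in these slides sit well below ceiling.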

How?


Decoder bases: 1x1, 1x2, 2x1, and 2x2 patches of the stimulus grid [figure: example basis patterns for each decoder scale]. The most trivial way to classify is to consider each pixel individually: can a certain pattern of brain activity be indicative of whether a single pixel was black or white?

Participants are shown the 10 x 10 grid of pixels while the fMRI signal is recorded. Can a certain pattern of brain activity be indicative of whether a pixel was black or white? Each decoder tests how certain groups of voxels correlate with a patch of pixels. 1x1: a single pixel; 1x2: picks up vertical contrast; 2x1: picks up horizontal contrast; 2x2: corners and edges.
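One way to combine the scales described above is to let each local decoder vote on every pixel it covers and average the votes, which is the spirit of the multi-scale combination in Miyawaki et al. The per-patch predictions below are fabricated placeholders; only the accumulation scheme is the point.

```python
import numpy as np

def accumulate(patch_preds, shape, patch_hw):
    """Spread per-patch predictions back over the image and count coverage."""
    ph, pw = patch_hw
    out = np.zeros(shape)
    count = np.zeros(shape)
    for i in range(shape[0] - ph + 1):
        for j in range(shape[1] - pw + 1):
            out[i:i + ph, j:j + pw] += patch_preds[i, j]
            count[i:i + ph, j:j + pw] += 1
    return out, count

grid = (10, 10)
# Fake per-patch "white" probabilities at two scales, for illustration only.
p11 = np.full((10, 10), 0.8)   # one prediction per pixel (1x1 decoder)
p22 = np.full((9, 9), 0.6)     # one prediction per 2x2 patch

s11, c11 = accumulate(p11, grid, (1, 1))
s22, c22 = accumulate(p22, grid, (2, 2))
combined = (s11 + s22) / (c11 + c22)   # average vote across scales
image = (combined > 0.5).astype(int)   # final black/white reconstruction
```

Interior pixels receive more 2x2 votes than corner pixels, so the combined map naturally weights the scales by how often they cover each location.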

Goals of the two-week project, in two parts: (1) data analysis: decode from the left vs. right hemisphere; (2) improve decoding accuracy. Inspiration for the data analysis: retinotopically, all visual information projects to the contralateral hemisphere. We thought that with larger visual receptive fields (in higher visual cortex), we might pick up some of the ipsilateral visual field.

Spiking Neural Networks: information is encoded in spike timing. We usually talk about firing rates of neurons, measured over an interval, but do the individual spikes themselves carry information? We were looking for classification methodologies that had not previously been tried on these data.
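The question driving this choice can be made concrete: two spike trains with the same firing rate over an interval are identical to a rate code yet distinct to a temporal code. The spike times below are made up for illustration.

```python
# Two spike trains (times in ms) with equal spike counts in a 40 ms window.
train_a = [5.0, 15.0, 25.0, 35.0]
train_b = [2.0, 11.0, 29.0, 38.0]
window_ms = 40.0

# A rate code only sees spikes per second and cannot tell them apart...
rate_a = len(train_a) / window_ms * 1000.0
rate_b = len(train_b) / window_ms * 1000.0

# ...while a temporal code distinguishes them by the spike times themselves.
same_rate = rate_a == rate_b       # True: both fire at 100 spikes/s
same_timing = train_a == train_b   # False: the timing differs
```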

Learning rule: Remote Supervised Method (ReSuMe), based on Spike-Timing-Dependent Plasticity (STDP), a Hebbian learning rule.

Neural Network Architecture and Time Encoding. 1497 neurons: input (1467), hidden, output (1), with feed-forward connections. Tested with 2 layers, since we have a linear problem; 3 layers would add the ability to learn non-linearities. Voxel data are normalized, rounded to the nearest tenth, and mapped to spike times:

Voxel value: 0 | 0.2 | 0.4 | 0.6 | 0.8 | 1
Spike time: 50 us | 70 us | 90 us | 110 us | 130 us | 150 us
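The voxel-to-spike-time mapping in the table is linear, so it can be sketched in a couple of lines (the function name is ours):

```python
def encode_voxel(value):
    """Map a normalized voxel value in [0, 1] to a spike time in microseconds."""
    v = round(value, 1)          # discretize, as on the slide
    return 50.0 + 100.0 * v      # 0 -> 50 us, 0.2 -> 70 us, ..., 1 -> 150 us
```

So a voxel reading of 0.43 is discretized to 0.4 and fires at 90 us, matching the table.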

Reconstructions (all decoders, V1, V2) [figure: stimulus; 1x1 decoder on V1; 1x1 on V1 left hemisphere; 1x1 on V1 right hemisphere].

Accuracy of the 1x1 decoder (V1) [figure: % correctly classified pixels]. V1, 1x1 decoder: MSE = 0.25.

Left vs. Right Hemisphere – V3. V3 right HS: MSE = 0.44; V3 left HS: MSE = 0.46. We compare image reconstructions after training the algorithm on one hemisphere only. Assumption: the right hemisphere should better reconstruct the left image part, and vice versa. Reconstruction is better at the center (presumably the fovea), with no difference between left and right. Low accuracy was expected here, because V3 neurons represent larger areas of the visual field.
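The hemisphere comparison can be quantified by computing MSE separately over the left and right halves of the reconstructed image. Everything below is toy data with a made-up helper; it illustrates the metric, not the project's actual numbers.

```python
import numpy as np

def halfwise_mse(recon, target):
    """Return (left-half MSE, right-half MSE) for same-shaped 2-D images."""
    mid = target.shape[1] // 2
    left = float(np.mean((recon[:, :mid] - target[:, :mid]) ** 2))
    right = float(np.mean((recon[:, mid:] - target[:, mid:]) ** 2))
    return left, right

rng = np.random.default_rng(1)
target = rng.integers(0, 2, size=(10, 10)).astype(float)

# Toy reconstruction: exact on the left half, noisy on the right half,
# mimicking what a right-hemisphere decoder would be expected to produce.
recon = target.copy()
recon[:, 5:] = rng.random((10, 5))

left_mse, right_mse = halfwise_mse(recon, target)
```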

Left vs. Right Hemisphere – V2. Going down the visual pathway to V2, the same comparison gives: V2 left HS: MSE = 0.39; V2 right HS: MSE = 0.42.

Left vs. Right Hemisphere – V1. V1 left HS: MSE = 0.29; V1 right HS: MSE = 0.22. After training on the left hemisphere, the right image part is again reconstructed better, and vice versa. But now we also find well-reconstructed pixels from ipsilateral activity. We discussed several possibilities for why this might be the case: lack of control of visual gaze during the experiment; other low-level features coded in V1, such as orientation; voxel correlations (MVPA). But we could not come up with a final explanation, and did not have the time for further analysis.

Future Directions: apply the trained weights of the right hemisphere to voxel data of the left hemisphere; try different decoders; Horikawa et al. (2013), neural decoding of visual imagery during sleep. If we had more time, we would have liked to apply the trained weights of one hemisphere to the voxel data of the other (the scientific question: how symmetric are retinotopic maps?).


References
Miyawaki, Y., Uchida, H., Yamashita, O., Sato, M., Morito, Y., Tanabe, H. C., Sadato, N., & Kamitani, Y. (2008). Visual image reconstruction from human brain activity using a combination of multi-scale local image decoders. Neuron, 60, 915-929. Video: http://www.youtube.com/watch?v=daY7uO0eftA
Horikawa, T., Tamaki, M., Miyawaki, Y., & Kamitani, Y. (2013). Neural decoding of visual imagery during sleep. Science, 340, 639-642.