J. Elder, PSYC 6256 Principles of Neural Coding. ASSIGNMENT 3: HEBBIAN LEARNING


Outline
The assignment requires you to:
- Download the code and data
- Run the code in its default configuration
- Experiment with altering parameters of the model
- Optional (for bonus marks): modify the model, for example to:
  - Incorporate other forms of synaptic normalization
  - Incorporate other learning rules (e.g., ICA)

Dataset
- I will provide you with 15 natural images of foliage from the McGill Calibrated Image dataset.
- These images have been prefiltered using a DoG (difference-of-Gaussians) model of primate LGN selectivity (Hawken & Parker 1991).
- The input therefore simulates the feedforward LGN input to V1.
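For reference, the sketch below shows the kind of center-surround DoG prefiltering being described. It is illustrative only (not the actual preprocessing script used for the assignment), assumes the Image Processing Toolbox, and uses a placeholder filename and filter scales.

% Minimal DoG prefiltering sketch (illustrative; filename and scales are assumptions)
im = im2double(imread('foliage_01.tif'));          % hypothetical image file
if size(im,3) == 3, im = rgb2gray(im); end         % work in grayscale
sigmaC = 1.0;  sigmaS = 3.0;                       % center/surround scales (assumed)
dog = imgaussfilt(im, sigmaC) - imgaussfilt(im, sigmaS);
lgnim = dog / std(dog(:));                         % normalize contrast before learning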

Submission Details
You will submit a short lab report on your experiments. For each experiment, the report will include:
- Any code you may have developed
- The parameter values you tested
- The graphs you produced
- The observations you made
- The conclusions you drew

Discussion Questions
In particular, I want your reports to answer at least the following questions (a brief sketch relevant to the first two follows this list):
- Why does Hebbian learning yield the dominant eigenvector of the stimulus autocorrelation matrix?
- What is the Oja rule and why is it needed?
- How does adding recurrence create diversity?
- What is the effect of varying the learning time constants?
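A brief sketch of the standard argument behind the first two questions (following Dayan & Abbott, Chapter 8; here the weight vector w of a single linear unit with output v = w'u is treated as a column vector). Averaging the Hebbian update over the stimulus ensemble gives

    \tau_w \frac{dw}{dt} = \langle v\,u \rangle = \langle u\,u^{\top} \rangle\, w = Q\,w

Expanding w in the eigenbasis of the autocorrelation matrix Q, the component along the eigenvector with the largest eigenvalue grows fastest, so w rotates toward the dominant eigenvector, but its norm grows without bound. The Oja rule adds a multiplicative decay term,

    \tau_w \frac{dw}{dt} = v\,u - \alpha\,v^{2}\,w

which leaves the direction of convergence unchanged but stabilizes the norm at 1/sqrt(alpha) (with alpha = 1 in the code that follows).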

Graphs
The graphs you produce should be as similar as possible to mine. Make sure everything is intelligible!

Due Date
The report is due Wednesday, April 13.

Eigenvectors

function lgneig(lgnims,neigs,nit)
% Computes and plots the first neigs eigenimages of the LGN inputs to V1.
% lgnims = cell array of images representing normalized LGN output
% neigs  = number of eigenimages to compute
% nit    = number of image patches on which to base the estimate

dx = 1.5;                        % pixel size in arcmin (arbitrary)
v1rad = round(10/dx);            % V1 cell radius (pixels)
Nu = (2*v1rad+1)^2;              % number of input units
nim = length(lgnims);

Q = zeros(Nu);
for i = 1:nit
    % (Assumed; elided on the original slide) sample a random patch from a random image
    im = lgnims{ceil(nim*rand)};
    [ny,nx] = size(im);
    x = randi([v1rad+1, nx-v1rad]);
    y = randi([v1rad+1, ny-v1rad]);

    u = im(y-v1rad:y+v1rad, x-v1rad:x+v1rad);
    u = u(:);
    Q = Q + u*u';                % accumulate the autocorrelation matrix
end
Q = Q/Nu;                        % normalize (a scale factor only; does not affect the eigenvectors)
[v,d] = eigs(Q,neigs);           % compute the leading eigenvectors

% (Assumed; elided on the original slide) display the eigenimages
for k = 1:neigs
    subplot(ceil(neigs/5), 5, k);
    imagesc(reshape(v(:,k), 2*v1rad+1, 2*v1rad+1));
    axis image; axis off; colormap gray;
end
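A typical invocation might look like the following; the filenames and parameter values are placeholders, not part of the assignment code.

% Hypothetical driver: load the prefiltered images and plot the first 10 eigenimages
lgnims = cell(1,15);
for k = 1:15
    lgnims{k} = double(imread(sprintf('lgn_foliage_%02d.png', k)));   % assumed filenames
end
lgneig(lgnims, 10, 50000);   % 10 eigenimages estimated from 50,000 random patches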

Output
[Figure from the slide (the resulting eigenimages) is not included in this transcript.]

Hebbian Learning (Feedforward)

function hebb(lgnims,nv1cells,nit)
% Implements a version of Hebbian learning with the Oja rule, running on
% simulated LGN inputs from natural images.
% lgnims   = cell array of images representing normalized LGN output
% nv1cells = number of V1 cells to simulate
% nit      = number of learning iterations

dx = 1.5;                        % pixel size in arcmin (arbitrary)
v1rad = round(60/dx);            % V1 cell radius (pixels)
Nu = (2*v1rad+1)^2;              % number of input units
tauw = 1e+6;                     % learning time constant
nim = length(lgnims);

w = normrnd(0,1/Nu,nv1cells,Nu); % random initial weights

for i = 1:nit
    % (Assumed; elided on the original slide) sample a random patch from a random image
    im = lgnims{ceil(nim*rand)};
    [ny,nx] = size(im);
    x = randi([v1rad+1, nx-v1rad]);
    y = randi([v1rad+1, ny-v1rad]);

    u = im(y-v1rad:y+v1rad, x-v1rad:x+v1rad);
    u = u(:);

    % See Dayan & Abbott, Section 8.2
    v = w*u;                     % output of each V1 unit

    % Update feedforward weights: Hebbian learning with the Oja rule
    w = w + (1/tauw)*(v*u' - repmat(v.^2,1,Nu).*w);
end
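In the notation of the sketch above, the final update in the loop is the discrete-time Oja rule applied independently to each output unit a (each row of w):

    w_a \leftarrow w_a + \frac{1}{\tau_w}\left( v_a\,u^{\top} - v_a^{2}\,w_a \right)

Because the units do not interact, each row is driven toward the same dominant eigenvector of Q (up to sign), so the learned receptive fields end up nearly identical. This is the motivation for adding recurrent inhibition on the next slide, and it bears directly on the discussion question about how recurrence creates diversity.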

Output
[Figure from the slide (the learned feedforward weights) is not included in this transcript.]

Hebbian Learning (With Recurrence)

function hebbfoldiak(lgnims,nv1cells,nit)
% Implements a version of Foldiak's 1989 network, running on simulated LGN
% inputs from natural images. Incorporates feedforward Hebbian learning and
% recurrent inhibitory anti-Hebbian learning.
% lgnims   = cell array of images representing normalized LGN output
% nv1cells = number of V1 cells to simulate
% nit      = number of learning iterations

dx = 1.5;                        % pixel size in arcmin (arbitrary)
v1rad = round(60/dx);            % V1 cell radius (pixels)
Nu = (2*v1rad+1)^2;              % number of input units
tauw = 1e+6;                     % feedforward learning time constant
taum = 1e+6;                     % recurrent learning time constant
nim = length(lgnims);            % (assumed; needed for the patch sampling below)

zdiag = (1-eye(nv1cells));       % all ones, but zero on the diagonal
w = normrnd(0,1/Nu,nv1cells,Nu); % random initial feedforward weights
m = zeros(nv1cells);             % recurrent inhibitory weights

for i = 1:nit
    % (Assumed; elided on the original slide) sample a random patch from a random image
    im = lgnims{ceil(nim*rand)};
    [ny,nx] = size(im);
    x = randi([v1rad+1, nx-v1rad]);
    y = randi([v1rad+1, ny-v1rad]);

    u = im(y-v1rad:y+v1rad, x-v1rad:x+v1rad);
    u = u(:);

    % See Dayan & Abbott, Chapter 8, and Foldiak 1989
    k = inv(eye(nv1cells)-m);
    v = k*w*u;                   % steady-state output for this input

    % Update feedforward weights: Hebbian learning with the Oja rule
    w = w + (1/tauw)*(v*u' - repmat(v.^2,1,Nu).*w);

    % Update inhibitory recurrent weights: anti-Hebbian learning (non-positive, zero diagonal)
    m = min(0, m + zdiag.*((1/taum)*(-v*v')));
end
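The matrix inverse inside the loop implements the steady state of the recurrent network: with output dynamics v = W u + M v (feedforward drive plus recurrent inhibition), the fixed point is

    v = (I - M)^{-1} W u

which is what k = inv(eye(nv1cells)-m); v = k*w*u; computes. The anti-Hebbian update makes M_ab more negative whenever units a and b are co-active, so correlated units inhibit each other more strongly; this decorrelates the outputs and pushes different units toward different components of the input rather than all converging on the dominant eigenvector. A call such as hebbfoldiak(lgnims, 16, 200000); (parameter values illustrative, not prescribed by the assignment) should therefore yield a more diverse set of receptive fields than the purely feedforward version.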

Output
[Figure from the slide (weights learned with recurrent inhibition) is not included in this transcript.]