
Unsupervised learning



Presentation on theme: "Unsupervised learning" — Presentation transcript:

1 Unsupervised learning
The Hebb rule – neurons that fire together wire together
PCA
RF (receptive field) development with PCA

2 Classical Conditioning
The fundamental question that draws most of my scientific attention is how experience shapes the brain. During this talk I will speak about two central concepts in neuroscience: receptive field plasticity and synaptic plasticity. Synaptic plasticity is the change in synaptic efficacy that occurs due to pre- and postsynaptic activity. One of the first people to suggest that synaptic plasticity serves as the basis for learning was the Canadian psychologist Donald Hebb, who wrote the following sentence in his famous 1949 book. In this slide I will try to illustrate with a simple example how synaptic plasticity is connected to behavior. Today there is strong evidence for the existence of synaptic plasticity, and it is considered the major candidate for the basis of learning, memory, and some aspects of development. It is therefore very important to know what rules govern synaptic plasticity. The central theme in my work, and throughout this talk, is to find out what these rules are. I think it is imperative that such work involve a combination of theoretical and experimental work.

3 Classical Conditioning and Hebb’s rule
[Slide diagram labels: Ear, A, Nose, B, Tongue]
"When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased." D. O. Hebb (1949)

4 The generalized Hebb rule:
Δw_i = η (x_i − x0)(y − y0)
where the x_i are the inputs and the output y is assumed linear:
y = Σ_j w_j x_j = w · x
Results in 2D:

5 Example of Hebb in 2D
[Figure: trajectory of the weight vector w for 2D inputs.] (Note: here inputs have a mean of zero.)
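A minimal numerical sketch of this 2D example (the covariance matrix, learning rate, and sample count below are illustrative choices, not taken from the slides): the norm of w keeps growing, but its direction lines up with the leading eigenvector of the input covariance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Zero-mean 2D inputs with correlated components (illustrative covariance).
C = np.array([[2.0, 1.2],
              [1.2, 1.0]])
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=C, size=5000)

eta = 1e-3                      # learning rate
w = rng.normal(size=2) * 0.1    # small random initial weights

for x in X:
    y = w @ x                   # linear neuron: y = w . x
    w += eta * y * x            # plain Hebb update: dw = eta * x * y

# |w| grows by orders of magnitude, but the direction of w aligns
# with the leading eigenvector of the input covariance matrix.
evals, evecs = np.linalg.eigh(C)
pc = evecs[:, np.argmax(evals)]
print("|w| =", np.linalg.norm(w))
print("alignment with PC:", abs(w / np.linalg.norm(w) @ pc))
```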

6 On the board:
Solve a simple linear first-order ODE.
Fixed points and their stability for nonlinear ODEs.
Describe potentials and gradient descent.

7 In the simplest case, the change in synaptic weight w is:
Δw_i = η x_i y
where x are the input vectors and y is the neural response. Assume for simplicity a linear neuron:
y = w · x
So we get:
Δw_i = η x_i (x · w)
Now take an average with respect to the distribution of inputs and get:
<Δw> = η <x xᵀ> w

8 In matrix notation
If a small change Δw occurs over a short time Δt, then in matrix notation:
dw/dt = η Q w,  with  Q = <x xᵀ>
If <x> = 0, Q is the covariance matrix. What, then, is the solution of this simple first-order linear ODE? (Show on board.)
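Since the solution is deferred to the board, here is a brief sketch of it: expanding w in the eigenbasis of the symmetric matrix Q decouples the equation into independent exponentials.

```latex
\mathbf{w}(t)=\sum_a c_a(t)\,\mathbf{u}_a,\qquad Q\mathbf{u}_a=\lambda_a\mathbf{u}_a
\;\Rightarrow\;
\dot c_a=\eta\lambda_a c_a
\;\Rightarrow\;
c_a(t)=c_a(0)\,e^{\eta\lambda_a t}.
```

The component along the eigenvector with the largest eigenvalue grows fastest, so the direction of w converges to the principal component while its norm diverges — which is why the rule must be stabilized.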

9 Mathematics of the generalized Hebb rule
The change in synaptic weight w is:
Δw_i = η (x_i − x0)(y − y0)
where x are the input vectors and y is the neural response. Assume for simplicity a linear neuron:
y = w · x
So we get:
Δw_i = η (x_i − x0)(w · x − y0)

10 Taking an average over the distribution of inputs,
and using
y = w · x  and  <x_i> = μ (every input with the same mean),
we obtain:
<Δw_i> = η [ (Q w)_i + μ(μ − x0) Σ_j w_j + y0(x0 − μ) ]

11 In matrix form
<Δw> = η [ Q w + k2 J w + k1 e ]
where J is the matrix of all ones, e is a vector in direction (1, 1, 1, …, 1), and
k1 = y0 (x0 − μ)  and  k2 = μ (μ − x0).

12 The equation therefore has the form
dw/dt = η [ Q w + k2 J w + k1 e ]
If k1 is not zero, this has a fixed point; however, it is usually not stable. If k1 = 0, we have:
dw/dt = η (Q + k2 J) w
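A brief sketch of why the k1 ≠ 0 fixed point is typically unstable (this reasoning is not spelled out on the slide): linearizing around the fixed point leaves the same linear operator driving the perturbation.

```latex
\mathbf{w}^{*}=-k_1\,(Q+k_2 J)^{-1}\mathbf{e},\qquad
\frac{d\,\delta\mathbf{w}}{dt}=\eta\,(Q+k_2 J)\,\delta\mathbf{w}.
```

Since Q is positive semidefinite, Q + k2 J generically has positive eigenvalues, so perturbations along those eigenvectors grow exponentially and the fixed point is unstable.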

13 The Hebb rule is unstable – how can it be stabilized while preserving its properties?
The stabilized Hebb (Oja) rule: normalize the weight vector after every Hebbian step.
Normalize:
w_i(t+1) = (w_i(t) + η y x_i) / |w(t) + η y x|
Approximate to first order in η (using y = w · x and |w| = 1):
1 / |w + η y x| ≈ 1 − η y²
Now insert this and get:
w_i(t+1) ≈ w_i(t) + η y (x_i − y w_i)

14 Therefore, the Oja rule has the form:
Δw_i = η y (x_i − y w_i)

15 Average over the input distribution. In matrix form:
<Δw> = η [ Q w − (wᵀ Q w) w ]

16 Using this rule, the weight vector converges to the eigenvector of Q with the largest eigenvalue. It is therefore often called a principal component (PCA) rule. The exact dynamics of the Oja rule were solved by Wyatt and Elfadel (1995). Variants of networks that extract several principal components have also been proposed (e.g., Sanger 1989).
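A compact numerical sketch of this convergence (the data dimension, covariance, and learning rate are illustrative assumptions, not from the slides): train a single Oja neuron and compare its weight vector with the principal component computed by numpy.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative zero-mean 3D data: x = A z with z ~ N(0, I), so cov(x) = A A^T.
A = rng.normal(size=(3, 3))
X = rng.normal(size=(20000, 3)) @ A.T

eta = 0.005
w = rng.normal(size=3)
w /= np.linalg.norm(w)

for x in X:
    y = w @ x                   # linear neuron output
    w += eta * y * (x - y * w)  # Oja update: dw = eta * y * (x - y w)

# Compare with the leading eigenvector of the sample covariance matrix.
Q = np.cov(X, rowvar=False)
evals, evecs = np.linalg.eigh(Q)
pc = evecs[:, -1]               # eigh returns eigenvalues in ascending order

print("|w| =", np.linalg.norm(w))                         # close to 1
print("|cos(w, PC)| =", abs(w @ pc) / np.linalg.norm(w))  # close to 1
```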

17 Therefore a stabilized Hebb neuron (an Oja neuron) carries out eigenvector extraction, i.e., principal component analysis (PCA).

18 Using this rule, the weight vector converges to the eigenvector of Q with the largest eigenvalue; it is often called a principal component (PCA) rule. Another way to look at this: write the averaged dynamics as
dw/dt = η [ Q w − β w ]
where for the Oja rule:
β = wᵀ Q w
At the fixed point:
Q w = β w,  where  β = wᵀ Q w
So the fixed point is an eigenvector of Q, and the condition β = wᵀ Q w means that w is normalized. Why? Could there be other choices for β?

19 Show that the Oja rule converges to the state |w|² = 1
The Oja rule in matrix form:
dw/dt = η [ Q w − (wᵀ Q w) w ]
Multiply by wᵀ and get:
d|w|²/dt = 2 wᵀ dw/dt = 2η (wᵀ Q w)(1 − |w|²)
Since wᵀ Q w > 0, the flow drives |w|² to 1.
Bonus question for H.W.: show the equivalence above, and that the direction of the PC is the direction of maximal variance (HKP, pg. 202).

20 Show that the fixed point of the Oja rule is such that the eigenvector with the largest eigenvalue (the PC) is stable while the others are not (from HKP, pg. 202). Start with:
dw/dt = η [ Q w − (wᵀ Q w) w ]
Assume w = u_a + ε u_b, where u_a and u_b are eigenvectors with eigenvalues λ_a and λ_b.

21 Get (show on board):
dε/dt = −η (λ_a − λ_b) ε + O(ε²)
Therefore the fixed point w = u_a is stable only when λ_a > λ_b for every b ≠ a.
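A sketch of the board computation behind this result: substitute w = u_a + ε u_b into the averaged Oja dynamics and keep terms to first order in ε.

```latex
Q\mathbf{w}=\lambda_a\mathbf{u}_a+\varepsilon\lambda_b\mathbf{u}_b,\qquad
\mathbf{w}^{\mathsf{T}}Q\mathbf{w}=\lambda_a+\varepsilon^{2}\lambda_b
\frac{d\mathbf{w}}{dt}
=\eta\left[\lambda_a\mathbf{u}_a+\varepsilon\lambda_b\mathbf{u}_b
-(\lambda_a+\varepsilon^{2}\lambda_b)(\mathbf{u}_a+\varepsilon\mathbf{u}_b)\right]
=-\eta(\lambda_a-\lambda_b)\,\varepsilon\,\mathbf{u}_b+O(\varepsilon^{2}),
```

so ε decays exactly when λ_a > λ_b, and only the principal eigenvector is a stable fixed point.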

