Unsupervised learning


1 Unsupervised learning
The Hebb rule – neurons that fire together wire together. PCA. Receptive-field (RF) development with PCA.

2 Classical Conditioning and Hebb’s rule
(Diagram: inputs from the ear (A), nose (B), and tongue converging on a neuron.) The fundamental question that draws most of my scientific attention is how experience shapes the brain. In this talk I will discuss two central concepts in neuroscience: receptive field plasticity and synaptic plasticity. Synaptic plasticity is the change in synaptic efficacy that occurs due to pre- and postsynaptic activity. One of the first people to suggest that synaptic plasticity serves as the basis for learning was the Canadian psychologist Donald Hebb, who wrote the sentence below in his famous 1949 book. In this slide I will try to illustrate, with a simple example, how synaptic plasticity is connected to behavior. Today there is strong evidence for the existence of synaptic plasticity, and it is considered the major candidate for the basis of learning, memory, and some aspects of development. It is therefore very important to know the rules that govern synaptic plasticity. The central theme of my work, and of this talk, is to find out what these rules are. I think it is imperative that such work involve a combination of theoretical and experimental work. “When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.” D. O. Hebb (1949)

3 The generalized Hebb rule:
where the xi are the inputs and the output y is assumed linear. (Results in 2D follow.)
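The slide's formulas appear as images in this transcript; a standard reconstruction of the basic rule it describes (the "generalized" version adds constant and decay terms, as on slides 9–11) is:

```latex
\Delta w_i = \eta\, y\, x_i, \qquad
y = \sum_i w_i x_i = \mathbf{w}^{\mathsf{T}}\mathbf{x}.
```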

4 Example of the Hebb rule in 2D (note: here the inputs have a mean of zero)
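A minimal simulation sketch of this 2D example. The specific input statistics (zero-mean Gaussian cloud, variances 3 and 1 along axes tilted by 30°) are my own choice, echoing the homework setup on slide 22:

```python
import numpy as np

rng = np.random.default_rng(0)

# Zero-mean 2D input cloud, elongated along a 30-degree axis
# (variances 3 and 1 along the principal axes).
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
X = (R @ np.diag([np.sqrt(3.0), 1.0]) @ rng.standard_normal((2, 5000))).T

# Plain Hebb rule: dw = eta * y * x, with linear output y = w . x
eta = 0.001
w = rng.standard_normal(2) * 0.1
for x in X:
    w += eta * (w @ x) * x

# The weight direction aligns with the leading eigenvector of the
# input covariance, but |w| itself grows without bound (instability).
Q = np.cov(X.T)
evals, evecs = np.linalg.eigh(Q)
pc = evecs[:, -1]                       # leading principal component
print(abs(w @ pc) / np.linalg.norm(w))  # close to 1
print(np.linalg.norm(w))                # much larger than the initial ~0.1
```

The run illustrates both halves of the story told on the next slides: alignment with the principal component, and unbounded growth of the norm.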

5 On the board: solve a simple linear first-order ODE; fixed points and their stability for a nonlinear ODE.

6 In the simplest case, the change in synaptic weight w is:
where x are the input vectors and y is the neural response. Assume for simplicity a linear neuron (equations on slide). Substituting, and then taking an average with respect to the distribution of inputs, we get:
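Reconstructing the chain of equations that the slide shows as images (standard notation, not verbatim from the slide):

```latex
\Delta \mathbf{w} = \eta\, y\, \mathbf{x}, \qquad
y = \mathbf{w}^{\mathsf{T}}\mathbf{x}
\;\Longrightarrow\;
\Delta \mathbf{w} = \eta\, \mathbf{x}\mathbf{x}^{\mathsf{T}} \mathbf{w},
\qquad
\langle \Delta \mathbf{w} \rangle
= \eta \,\langle \mathbf{x}\mathbf{x}^{\mathsf{T}} \rangle\, \mathbf{w}
= \eta\, Q\, \mathbf{w}.
```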

7 In matrix notation
If a small change Δw occurs over a short time Δt, then dw/dt ∝ Qw. If <x>=0, Q is the covariance matrix. What is then the solution of this simple first-order linear ODE? (Show on board.)
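The on-board solution can be sketched as follows. Expanding w(0) in the eigenbasis of Q:

```latex
\frac{d\mathbf{w}}{dt} = Q\,\mathbf{w}, \qquad
Q\,\mathbf{e}_i = \lambda_i\,\mathbf{e}_i
\;\Longrightarrow\;
\mathbf{w}(t) = \sum_i a_i\, e^{\lambda_i t}\, \mathbf{e}_i,
\qquad a_i = \mathbf{e}_i^{\mathsf{T}}\mathbf{w}(0).
```

Since Q is a covariance matrix, its eigenvalues are non-negative, so the component along the largest eigenvalue grows fastest and |w| diverges: the plain Hebb rule is unstable, which motivates slide 12.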

8 Mathematics of the generalized Hebb rule
The change in synaptic weight w is given by the generalized rule, where x are the input vectors and y is the neural response. Assume for simplicity a linear neuron. So we get:

9 Taking an average over the distribution of inputs
Using the definitions shown on the slide, we obtain:

10 In matrix form: where J is the matrix of ones and e is the unit vector in the direction (1,1,…,1).

11 The equation therefore has the form
If k1 is nonzero, this has a fixed point; however, it is usually not stable. If k1 = 0 we then have:

12 The Hebb rule is unstable – how can it be stabilized while preserving its properties?
The stabilized Hebb (Oja) rule: normalize the weights so that |w(t)| = 1 after each update. Approximate to first order in Δw (show on board), insert the Hebbian update, and get:
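The normalization-and-expansion step, reconstructed (using |w(t)| = 1 and y = wᵀx):

```latex
\mathbf{w}(t+\Delta t)
= \frac{\mathbf{w}(t) + \eta\, y\, \mathbf{x}}
       {\left\lVert \mathbf{w}(t) + \eta\, y\, \mathbf{x} \right\rVert}
\approx \mathbf{w}(t) + \eta\, y\,\left(\mathbf{x} - y\,\mathbf{w}(t)\right)
+ O(\eta^{2}).
```

The extra term −ηy²w is the "forgetting" term that keeps the norm bounded.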

13 Therefore, the Oja rule has the form:

14 Taking the average, in matrix form:

15 Using this rule, the weight vector converges to
the eigenvector of Q with the largest eigenvalue. It is therefore often called a principal-component (PCA) rule. The exact dynamics of the Oja rule were solved by Wyatt and Elfadel (1995). Variants of networks that extract several principal components have been proposed (e.g., Sanger 1989).
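A sketch of this convergence in simulation. The input distribution (zero-mean 3D Gaussian with variances 4, 1, 0.25 along the axes) is my own illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(1)

# Zero-mean 3D inputs with variances 4, 1, 0.25 along the axes,
# so the leading principal component is the x-axis.
X = rng.standard_normal((20000, 3)) * np.array([2.0, 1.0, 0.5])

# Oja rule: dw = eta * y * (x - y * w), with linear output y = w . x
eta = 0.005
w = rng.standard_normal(3)
w /= np.linalg.norm(w)
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)

# w should end up a unit vector along the leading eigenvector of the
# input covariance, up to sign and sampling noise.
Q = np.cov(X.T)
evals, evecs = np.linalg.eigh(Q)
print(np.linalg.norm(w))        # stays close to 1
print(abs(w @ evecs[:, -1]))    # close to 1: w is the top eigenvector
```

Unlike the plain Hebb rule, the norm hovers near 1 instead of diverging.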

16 Therefore, a stabilized Hebb neuron (an Oja neuron) carries out eigenvector extraction, i.e., principal component analysis (PCA).

17 Using this rule, the weight vector converges to
the eigenvector of Q with the largest eigenvalue. It is often called a principal-component (PCA) rule. Another way to look at this: for the Oja rule, dw/dt = Qw − (w^T Q w)w. At the fixed point, Qw = βw with β = w^T Q w, so the fixed point is an eigenvector of Q. The condition β = w^T Q w then forces w^T w = 1, i.e., w is normalized. Why? Could there be other choices for β?

18 Show that the Oja rule converges to the state |w|² = 1
The Oja rule in matrix form: dw/dt = Qw − (w^T Q w)w. What is d|w|²/dt? Bonus question for HW: why does the equivalence above prove convergence to normalization?
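The norm dynamics behind this exercise can be sketched directly from the Oja form dw/dt = Qw − (wᵀQw)w (the "why" of the bonus question is left as asked):

```latex
\frac{d\lVert\mathbf{w}\rVert^{2}}{dt}
= 2\,\mathbf{w}^{\mathsf{T}}\frac{d\mathbf{w}}{dt}
= 2\,\mathbf{w}^{\mathsf{T}}\!\left( Q\mathbf{w}
      - (\mathbf{w}^{\mathsf{T}}Q\mathbf{w})\,\mathbf{w} \right)
= 2\,(\mathbf{w}^{\mathsf{T}}Q\mathbf{w})
  \left( 1 - \lVert\mathbf{w}\rVert^{2} \right).
```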

19 Show that, at the fixed points of the Oja rule, the eigenvector with the largest eigenvalue (the PC) is stable while the others are not (from HKP, pg. 202). Start with: assume w = ua + εub, where ua and ub are eigenvectors with eigenvalues λa and λb.

20 Get (show on board): dε/dt = (λb − λa)ε. Therefore, the fixed point w = ua is stable only when λa > λb for every b.
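Filling in the on-board steps (standard perturbation analysis, as in HKP):

```latex
\frac{d\mathbf{w}}{dt} = Q\mathbf{w} - (\mathbf{w}^{\mathsf{T}}Q\mathbf{w})\,\mathbf{w},
\qquad \mathbf{w} = \mathbf{u}_a + \varepsilon\,\mathbf{u}_b ,
```

so that, to first order in ε,

```latex
\mathbf{w}^{\mathsf{T}}Q\mathbf{w} = \lambda_a + O(\varepsilon^{2}),
\qquad
\frac{d\varepsilon}{dt} = (\lambda_b - \lambda_a)\,\varepsilon + O(\varepsilon^{2}),
```

which decays to zero only if λa > λb.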

21 Finding multiple principal components – the Sanger algorithm
Each unit subtracts the projection onto the subspace already accounted for by earlier units (Gram–Schmidt), then applies the standard Oja normalization.
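A sketch of Sanger's rule extracting two principal components. The input statistics (rotated anisotropic Gaussian) are my own illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(2)

# Zero-mean inputs with distinct variances along the axes, then rotated,
# so the principal components are known but not axis-aligned.
theta = np.deg2rad(25.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
X = (R @ (rng.standard_normal((3, 30000)) * np.array([[2.0], [1.0], [0.5]]))).T

# Sanger's rule for m output units:
#   dW[i] = eta * y_i * (x - sum_{j<=i} y_j W[j])
# i.e. each unit does Oja on the input minus the projection onto the
# subspace already accounted for by earlier units (Gram-Schmidt-like).
eta = 0.002
m = 2
W = rng.standard_normal((m, 3)) * 0.1
for x in X:
    y = W @ x
    for i in range(m):
        residual = x - y[:i + 1] @ W[:i + 1]
        W[i] += eta * y[i] * residual

Q = np.cov(X.T)
evals, evecs = np.linalg.eigh(Q)
print(abs(W[0] @ evecs[:, -1]))   # first unit ~ first PC
print(abs(W[1] @ evecs[:, -2]))   # second unit ~ second PC
```

The `j <= i` in the subtraction (rather than `j < i`) is what folds the Oja normalization into the same term.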

22 Homework 1 (due 1/28):
1a) Implement a simple Hebb neuron with random 2D input, tilted at an angle θ = 30°, with variances 1 and 3 and mean 0. Show the synaptic weight evolution (at least 200 patterns).
1b) Calculate the correlation matrix of the input data. Find the eigenvalues and eigenvectors of this matrix. Compare to 1a.
1c) Repeat 1a for an Oja neuron; compare to 1b.
+ bonus question above (another 25 pt)
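To make the setup concrete, here is only the data-generation scaffold for the homework (seed and variable names are my own); the learning-rule implementations of 1a and 1c are left to the exercise:

```python
import numpy as np

rng = np.random.default_rng(3)

# Input ensemble from 1a: 2D Gaussian, mean 0, variances 3 and 1
# along principal axes, tilted by theta = 30 degrees.
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
n = 200                                    # "at least 200 patterns"
X = (R @ np.diag([np.sqrt(3.0), 1.0]) @ rng.standard_normal((2, n))).T

# 1b: correlation matrix of the (zero-mean) data and its eigen-decomposition.
Q = (X.T @ X) / n
evals, evecs = np.linalg.eigh(Q)
print(evals)                # roughly (1, 3), up to sampling noise
print(evecs[:, -1])         # roughly (cos 30deg, sin 30deg), up to sign
```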

23 What did we learn up to here?

24 Visual Pathway
(Diagram: light → retina → electrical signals → LGN → visual cortex, Area 17.) In Area 17 the receptive fields are binocular and orientation selective; in the retina and LGN they are monocular and radially symmetric. I have chosen the visual cortex as a model system. It is a good system because there is a lot of experimental data on visual cortex plasticity, and because it is easy to directly control the inputs to the visual cortex. (Describe the visual pathway; stress that the LGN is monocular, not orientation selective, and radially symmetric.)

25 Tuning curves
(Figure: orientation tuning curves for the left and right eyes; response in spikes/sec vs. angle, 0–360°.)

26 Orientation Selectivity
(Figure: tuning curves, response (spikes/sec) vs. angle, at eye-opening and in the adult, for normal and binocularly deprived animals.) It has been established that the maturation of orientation selectivity is experience dependent. In cats, at birth, some cells show broadly tuned orientation selectivity. As the animal matures in a natural environment, its cells become more orientation selective. If an animal is deprived of a patterned environment, it will not develop orientation selectivity, and it will even lose whatever orientation selectivity it had at eye opening.

27 Monocular Deprivation
(Figure: left-eye and right-eye tuning curves, response (spikes/sec) vs. angle, for normal and monocularly deprived animals; ocular-dominance histogram, % of cells per OD group, after Rittenhouse et al.) Cells in visual cortex show varying degrees of ocular dominance, and can be classified by their degree of ocular dominance. If an animal is monocularly deprived by lid suture, the OD histogram is altered, shifting toward the open eye.

28 First use Hebb/PCA with toy examples, then with more realistic examples.

29 Aim: get selective neurons using a Hebb/PCA rule
Simple example (diagram on slide, with inputs labeled r):

30 Why? The eigenvalue equation has the form shown on the slide. Q can be rewritten in an equivalent form, and a possible solution can be written as a sum of modes:

31 Inserting, and using orthogonality, we get:
for l = 0, λ = 2; for l = 1, λ = q; for l > 1 there is no solution. So either w(r) = const with λ = 2, or w(r) is the l = 1 mode with λ = q.
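The slide's equations are lost in the transcript; one consistent reconstruction (the specific form of Q here is an assumption, chosen to reproduce the quoted eigenvalues) is:

```latex
Q(r, r') = \frac{1}{\pi}\Bigl(1 + q\,\cos(r - r')\Bigr),
\qquad
\int_0^{2\pi} Q(r, r')\, w(r')\, dr' = \lambda\, w(r).
```

Expanding w(r) in Fourier modes, the constant (l = 0) mode gives λ = 2, the l = 1 mode cos(r + φ) gives λ = q, and all higher modes are annihilated by Q. If q > 2, the selective l = 1 mode wins.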

32 Orientation selectivity from a natural environment:
The Images:

33

34 Natural Images, Noise, and Learning
(Diagram: patches taken from a retinal activity image (retina → LGN), or from noise, are presented to the cortex; present patches, update weights.) How can we figure out whether this rule is reasonable? This usually requires simulations. In order to compare a theory to a realistic experimental condition, the assumptions (both explicit and implicit) of the theory need to be realistic enough. Many models in the past have assumed simplified visual environments. One of our contributions to the field was performing simulations and analysis based on natural images. I will now describe how such simulations are run. (Run through pictures.)

35 Raw images: (fig 5.8)

36 Preprocessed images: (fig 5.9)

37 Monocular Deprivation
(Figure: left-eye and right-eye tuning curves, response (spikes/sec) vs. angle, for normal and monocularly deprived animals; ocular-dominance histogram, % of cells per OD group, after Rittenhouse et al.) Cells in visual cortex show varying degrees of ocular dominance, and can be classified by their degree of ocular dominance. If an animal is monocularly deprived by lid suture, the OD histogram is altered, shifting toward the open eye.

38 Binocularity – simple examples
Q is a two-eye correlation function. What is the solution of the eigenvalue equation:

39 In a higher-dimensional case
Qll, Qlr, etc. are now matrices, and Qlr = Qrl. The eigenvectors now have the form:

40 In a simpler case
This implies Qll = Qrr, that is, the two eyes are equivalent, and the cross-eye correlation is a scaled version of the single-eye correlation. If so, then the two-eye eigenvectors follow directly from the single-eye ones:
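A reconstruction of the missing equations (writing C for the single-eye correlation and η for the cross-eye scale factor, as on slide 41):

```latex
Q = \begin{pmatrix} Q_{ll} & Q_{lr} \\ Q_{rl} & Q_{rr} \end{pmatrix}
  = \begin{pmatrix} C & \eta C \\ \eta C & C \end{pmatrix},
\qquad
Q \begin{pmatrix} \mathbf{u} \\ \pm\mathbf{u} \end{pmatrix}
  = (1 \pm \eta)\,\mu \begin{pmatrix} \mathbf{u} \\ \pm\mathbf{u} \end{pmatrix}
\quad\text{for } C\mathbf{u} = \mu\mathbf{u}.
```

So positive cross-eye correlations (η > 0) favor the symmetric, binocular mode, while negative correlations favor the antisymmetric mode.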

41 (Figure: receptive fields for positive correlations (η = 0.2) and negative correlations (η = −0.2); Hebb rule with a lower saturation bound at 0.)

42 Let's now assume that Q is as above for the 1D selectivity example.

43

44 With 2D space included

45 Two partially overlapping eyes, using natural images

46 Orientation selectivity and Ocular Dominance
(Figure: PCA model with left-eye and right-eye inputs feeding left and right synapses; histogram of number of cells per OD bin.) Can PCA neurons exhibit both orientation selectivity and varying degrees of OD? Explain the model on the left, then the results on the right. Observation: the RFs are always binocular, and have a certain symmetry to them; one can prove it must be so by symmetry arguments. So the answer is no: PCA alone does not produce varying degrees of ocular dominance.

47 What did we learn today?

48 The Hebb rule is unstable – how can it be stabilized while preserving its properties?
The stabilized Hebb (Oja) rule. This has a fixed point at Qw = λw, where λ = w^T Q w. The only stable fixed point is the one with λ = λmax.

