
1 Chapter 7 Supervised Hebbian Learning

2 Outline: Linear Associator, The Hebb Rule, Pseudoinverse Rule, Application

3 Linear Associator Hebb's learning law can be applied to many network architectures; the linear associator is chosen here so we can focus on the learning law rather than on the architecture. When the input changes slightly, the output also changes accordingly. What other network architectures could be used? How does it learn?
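A minimal sketch of the linear associator (output a = Wp, with a purely linear transfer function); the dimensions and example values below are assumptions for illustration, not taken from the slide:

```python
import numpy as np

# Linear associator: a = W p, with a purely linear transfer function.
# R = number of input elements, S = number of output elements (assumed sizes).
R, S = 4, 3
W = np.zeros((S, R))                      # weight matrix, to be learned by the Hebb rule
p = np.array([1.0, -1.0, 1.0, -1.0])      # example input pattern (assumed values)
a = W @ p                                 # network output (length S)
```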

4 Hebb’s Postulate “When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.” D. O. Hebb, 1949. In other words, when neuron A’s axon repeatedly helps to fire the nearby neuron B, a metabolic change takes place that increases A’s effectiveness in firing B. If neuron A responds to a stimulus and neuron B responds as well, the weight between A and B is strengthened. Hebb’s law states that if neuron i is close enough to excite neuron j and repeatedly takes part in firing it, the synaptic connection between the two neurons is strengthened, and neuron j becomes especially sensitive to stimuli coming from neuron i.

5 Hebb Rule (1/2) Training Set? Postsynaptic Signal? Presynaptic Signal?
p_jq: the j-th element of the q-th input vector. a_iq: the i-th element of the output produced by the q-th input. alpha: the learning rate, a positive constant. Hebb rule: when p_j produces a positive a, the weight is strengthened; the change W_new - W_old is proportional to both a and p.
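A minimal sketch of this update, w_ij_new = w_ij_old + alpha * a_iq * p_jq, written in matrix form; the function name is an assumption for illustration:

```python
import numpy as np

def hebb_update(W, p, a, alpha=1.0):
    """Hebb rule: W_new = W_old + alpha * a * p^T.
    p is the presynaptic (input) vector, a the postsynaptic (output) vector."""
    return W + alpha * np.outer(a, p)
```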

6 Hebb Rule (2/2) Why is it called the supervised form?
f and g are identity functions for all i, j. Identity function: in mathematics, the identity function is a function with no effect; it always returns the same value as its argument, i.e. f(x) = x. Simplified form: a generalization of the Hebb rule; the equation shows that a and p must have the same sign for the weight to be strengthened, while opposite signs weaken it. The simplified form is unsupervised (there is no target output); since this chapter studies supervised learning, the actual output is replaced by the target output, with alpha = 1.
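A minimal sketch of the supervised form, where the target t replaces the actual output and alpha = 1; the function name is an assumption for illustration:

```python
import numpy as np

def supervised_hebb_update(W, p, t):
    """Supervised Hebb rule with alpha = 1: W_new = W_old + t * p^T,
    where the target t replaces the actual output a."""
    return W + np.outer(t, p)
```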

7 Batch Operation W: assuming the weight matrix starts at zero, after presenting all Q training pairs the individual updates can be collected into a single matrix product.
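In batch form the supervised Hebb rule can be written as the following sum (a sketch of the standard expression, assuming zero initial weights):

```latex
W = \sum_{q=1}^{Q} t_q p_q^T = T P^T,
\qquad
T = [\, t_1 \; t_2 \; \cdots \; t_Q \,], \quad
P = [\, p_1 \; p_2 \; \cdots \; p_Q \,]
```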

8 Performance Analysis (1/2)
Input patterns are orthonormal. In linear algebra, two vectors in an inner product space are orthonormal if they are orthogonal and both of unit length. A set of vectors forms an orthonormal set if all vectors in the set are mutually orthogonal and all of unit length. An orthonormal set which forms a basis is called an orthonormal basis. p_k: the input presented to the network; p_q: a training (prototype) input.
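The key step (a sketch of the standard argument): with W = T P^T, presenting one of the prototype inputs p_k gives

```latex
a = W p_k = \Bigl(\sum_{q=1}^{Q} t_q p_q^T\Bigr) p_k
  = \sum_{q=1}^{Q} t_q \,(p_q^T p_k) = t_k,
\qquad \text{since } p_q^T p_k =
\begin{cases} 1 & q = k \\ 0 & q \neq k \end{cases}
```

so each prototype input is mapped exactly to its target when the inputs are orthonormal.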

9 Performance Analysis (2/2)

10 Example (orthonormal)

11 Example (not orthogonal)

12 Example (not orthogonal)

13 Pseudoinverse Rule (1/3)

14 Pseudoinverse Rule (2/3)
From the linear associator we want WP = T. If the prototype input vectors p_q are orthonormal, then Wp_q = t_q and the performance index F(W) = 0. The p_q are usually independent, but R (the dimension of each p_q) is typically larger than Q (the number of p_q), so P is not a square matrix and its inverse does not exist. With P an R x Q matrix, a left inverse exists if rank(P) = Q, i.e. if P has full column rank.
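The performance index referred to above is the usual sum of squared errors (a sketch, consistent with the F(W) = 0 remark):

```latex
F(W) = \sum_{q=1}^{Q} \lVert t_q - W p_q \rVert^2
```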

15 Pseudoinverse Rule (3/3)
P^+ is the Moore-Penrose pseudoinverse. The pseudoinverse of a real matrix P is the unique matrix P^+ that satisfies P P^+ P = P, P^+ P P^+ = P^+, (P^+ P)^T = P^+ P, and (P P^+)^T = P P^+.
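A minimal sketch of the pseudoinverse rule W = T P^+ using NumPy; the example patterns below are assumptions for illustration only:

```python
import numpy as np

# Prototype inputs as columns of P, targets as columns of T (assumed example values).
P = np.array([[ 1.0,  1.0],
              [-1.0,  1.0],
              [ 1.0, -1.0]])     # R = 3, Q = 2 (independent but not orthonormal columns)
T = np.array([[-1.0,  1.0]])     # S = 1 output

W = T @ np.linalg.pinv(P)        # pseudoinverse rule: W = T P^+
print(W @ P)                     # reproduces T exactly because the columns of P are independent
```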

16 Relationship to the Hebb Rule
If P is an R x Q matrix with R >= Q and full column rank, then P^T P is invertible and the pseudoinverse can be computed as P^+ = (P^T P)^(-1) P^T.
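This is what connects the two rules (a sketch of the standard argument): when P has full column rank,

```latex
P^+ = (P^T P)^{-1} P^T,
\qquad
W_{\text{pinv}} = T P^+ = T (P^T P)^{-1} P^T
```

and if, in addition, the prototype inputs are orthonormal, then P^T P = I and this reduces to the Hebb-rule weight matrix W = T P^T.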

17 Relationship to the Hebb Rule

18 Example

19 Autoassociative Memory

20 Noisy Patterns (7 pixels)
Tests: 50% occluded, 67% occluded, and noisy patterns (7 pixels).

21 Variations of Hebbian Learning
Basic Rule. Learning Rate: alpha limits how much the weights can increase on each step. Unsupervised. Smoothing: the (1 - gamma) factor reduces the contribution of the old weights; as gamma approaches 0 this becomes the standard rule, while as gamma approaches 1 the old W is forgotten and only the new term remains. Delta Rule. A sketch of these variations follows below.
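A minimal sketch of these variations of the Hebbian update; the formulas follow the standard forms and the function names and default parameter values are assumptions for illustration:

```python
import numpy as np

def hebb_basic(W, p, t):
    # Basic supervised Hebb rule.
    return W + np.outer(t, p)

def hebb_learning_rate(W, p, t, alpha=0.1):
    # Learning rate: alpha < 1 limits the size of each weight increase.
    return W + alpha * np.outer(t, p)

def hebb_smoothing(W, p, t, alpha=0.1, gamma=0.05):
    # Smoothing (weight decay): old weights are scaled by (1 - gamma).
    # gamma -> 0 recovers the standard rule; gamma -> 1 forgets the old W.
    return (1.0 - gamma) * W + alpha * np.outer(t, p)

def delta_rule(W, p, t, alpha=0.1):
    # Delta rule: the update is driven by the error (t - a) instead of t.
    a = W @ p
    return W + alpha * np.outer(t - a, p)

def hebb_unsupervised(W, p, alpha=0.1):
    # Unsupervised Hebb rule: uses the actual output a rather than a target.
    a = W @ p
    return W + alpha * np.outer(a, p)
```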

22 Solved Problems

23 Solved Problems

24 Solved Problems Solution:

25 Solved Problems

26 Solved Problems

27 Solved Problems

