Soft Competitive Learning without Fixed Network Dimensionality Jacob Chakareski and Sergey Makarov Rice University, Worcester Polytechnic Institute.

1 Soft Competitive Learning without Fixed Network Dimensionality Jacob Chakareski and Sergey Makarov Rice University, Worcester Polytechnic Institute

2 Algorithms
- Neural Gas
- Competitive Hebbian Learning
- Neural Gas + Competitive Hebbian Learning
- Growing Neural Gas

3 Neural Gas
- Sorts the network units by their distance from the input signal
- Adapts a certain number of units based on this "rank order"
- The number of adapted units and the adaptation strength are decreased according to a fixed schedule

4 The algorithm
- Initialize a set A with N units c_i
- Sort the network units by distance to the input signal
- Adapt the network units according to their rank
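
The rank-based adaptation step above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation; the function name, the step size `eps`, and the neighborhood range `lam` are illustrative choices:

```python
import math

def neural_gas_step(units, x, eps, lam):
    """One Neural Gas adaptation: rank the units by distance to the
    input signal x, then move every unit toward x with a strength
    that decays exponentially in its rank."""
    # Sort unit indices by squared distance to the input signal
    ranked = sorted(range(len(units)),
                    key=lambda i: sum((u - v) ** 2 for u, v in zip(units[i], x)))
    for rank, i in enumerate(ranked):
        h = math.exp(-rank / lam)  # adaptation strength falls with rank
        units[i] = [u + eps * h * (v - u) for u, v in zip(units[i], x)]
    return units
```

In the full algorithm, `eps` and `lam` are themselves decreased over time according to the fixed schedule mentioned on the previous slide.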

5 Simulation Results

6 Competitive Hebbian Learning
- Usually not used on its own, but in conjunction with other methods
- Does not change the reference vectors w_j at all
- Only generates a number of neighborhood edges between the units of the network

7 The algorithm
- Initialize a set A with N units c_i and the connection set C
- Determine the two units s_1 and s_2 closest to the input signal
- Create a connection between s_1 and s_2
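
The step above can be sketched as follows. This is a hedged illustration; the function name and the representation of edges as `frozenset` pairs are my own choices:

```python
def chl_step(units, connections, x):
    """Competitive Hebbian Learning step: find the two units nearest
    the input signal x and connect them. Reference vectors are never
    moved, only the edge set grows."""
    d = lambda i: sum((u - v) ** 2 for u, v in zip(units[i], x))
    s1, s2 = sorted(range(len(units)), key=d)[:2]
    connections.add(frozenset((s1, s2)))  # undirected neighborhood edge
    return s1, s2
```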

8 Simulation Results

9 Neural Gas + CHL
- A superposition of NG and CHL
- Sometimes denoted as "topology-representing networks"
- A local edge-aging mechanism is implemented to remove edges that are no longer valid

10 The algorithm
- Set the age of the connection between s_1 and s_2 to zero ("refresh" the edge)
- Increment the age of all edges emanating from s_1
- Remove edges with an age larger than the current age T(t)
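
The edge-aging rules above can be sketched like this. A minimal illustration: the function name is mine, and the scalar `max_age` stands in for the slide's time-dependent threshold T(t):

```python
def age_edges(ages, s1, s2, max_age):
    """Edge maintenance for NG + CHL: refresh the s1-s2 edge, age all
    other edges emanating from s1, and drop edges older than max_age."""
    ages[frozenset((s1, s2))] = 0          # "refresh" the winner edge
    for edge in list(ages):
        if s1 in edge and edge != frozenset((s1, s2)):
            ages[edge] += 1                # increment age of s1's edges
            if ages[edge] > max_age:
                del ages[edge]             # edge is no longer valid
    return ages
```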

11 Simulation Results

12 Growing Neural Gas
- The number of units changes (mostly increases) during the self-organization process
- Starting with very few units, new units are added successively
- Local error measures are gathered to determine where to insert new units
- Each new unit is inserted near the unit with the largest accumulated error

13 The algorithm
- Add the squared distance between the input signal and the winner to a local error variable
- Adapt the winner and its neighbors
- If the number of input signals generated so far is an integer multiple of a parameter, insert a new unit:

14
- Determine the unit q with the maximum accumulated error
- Determine the neighbor f of q with the maximum accumulated error
- Add a new unit r to the network
- Insert edges connecting r with q and f, and remove the original edge between q and f
- Decrease the error variables of q and f

15
- Interpolate the error variable of r from q and f
- Decrease the error variables of all units
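
The insertion step of slides 14-15 can be sketched as a single routine. A hedged illustration, not the authors' code: the error-decrease factor `alpha`, the midpoint placement of the new unit r, and the mean as the interpolation rule are my assumptions:

```python
def gng_insert(units, errors, edges, alpha):
    """GNG insertion: place a new unit r between the unit q with the
    largest accumulated error and q's worst-error neighbor f, rewire
    the edges, and decrease/interpolate the error variables."""
    q = max(range(len(units)), key=lambda i: errors[i])
    neighbors = [j for e in edges if q in e for j in e if j != q]
    f = max(neighbors, key=lambda j: errors[j])
    r = len(units)
    # Place r halfway between q and f (assumed placement rule)
    units.append([(a + b) / 2 for a, b in zip(units[q], units[f])])
    edges.discard(frozenset((q, f)))              # remove edge q-f
    edges.update({frozenset((q, r)), frozenset((r, f))})
    errors[q] *= alpha                            # decrease errors of q, f
    errors[f] *= alpha
    errors.append((errors[q] + errors[f]) / 2)    # interpolate error of r
    return r
```

The final bullet of slide 15, the global decrease of all error variables after every input signal, is omitted here for brevity.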

16 Simulation Results

17 Applications: Web/Database Maps

