1. Artificial Neural Networks II – Outline
J. Kubalík, Gerstner Laboratory for Intelligent Decision Making and Control
Cascade Nets and the Cascade-Correlation Algorithm
– Architecture – incremental building of the net
Hopfield Networks
– Recurrent networks, associative memory
– Hebb learning rule
– Energy function and capacity of the Hopfield network
– Applications
Self-Organising Networks
– Spatial representation of data used to code the information
– Unsupervised learning
– Kohonen Self-Organising Maps
– Applications

2. Cascade Nets and the Cascade-Correlation Algorithm
Starts with an input and an output layer of neurons and builds a hierarchy of hidden units
– feed-forward network with n input, m output and h hidden units
Perceptrons in the hidden layer are ordered – cascaded (lateral) connections
– each takes inputs from the input layer and from all antecedent hidden units
– the i-th hidden unit therefore has n + (i − 1) inputs
Output units are connected to all input and hidden units

3. Cascade Nets: Topology
Layers: input (x), hidden (z), output (y)
Active mode:
– hidden perceptrons: z_i = f(w_i · (x_1, …, x_n, z_1, …, z_{i−1})), for i = 1, …, h
– output units: y_j = f(v_j · (x_1, …, x_n, z_1, …, z_h)), for j = 1, …, m
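
A minimal NumPy sketch of this active pass; the exact weight layout, the choice of f = tanh and the omission of bias terms are illustrative assumptions, not taken from the slides:

```python
import numpy as np

def cascade_forward(x, hidden_w, output_w, f=np.tanh):
    """Active pass of a cascade net with n inputs, h hidden and m output units.

    hidden_w : list of h weight vectors; the i-th has n + (i - 1) entries,
               one per input plus one per antecedent hidden unit
    output_w : (m, n + h) matrix; output units see all inputs and hidden units
    """
    x = np.asarray(x, dtype=float)
    z = []                                    # hidden outputs z_1 .. z_h
    for w in hidden_w:
        pre = np.concatenate([x, z])          # inputs + antecedent hidden units
        z.append(f(np.dot(w, pre)))           # z_i = f(w_i . (x, z_1..z_{i-1}))
    full = np.concatenate([x, z])             # inputs + all hidden units
    y = f(np.asarray(output_w) @ full)        # y_j = f(v_j . (x, z_1..z_h))
    return np.array(z), y
```

For a net with n = 3 inputs, h = 2 hidden units and m = 1 output, hidden_w would hold vectors of lengths 3 and 4, and output_w would have shape (1, 5).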

4. Cascade-Correlation Algorithm
Start with a minimal configuration of the network (h = 0)
Repeat until satisfied:
– initialise a set of candidates for a new hidden unit, i.e. connect them to the input units (and to all existing hidden units)
– adapt their weights so as to maximise the correlation between their outputs and the residual error of the network
– choose the best candidate and connect it to the output units
– adapt the weights of the output perceptrons
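
A sketch of the candidate-scoring step, assuming Fahlman and Lebiere's covariance-based correlation measure S = Σ_o |Σ_p (V_p − V̄)(E_{p,o} − Ē_o)|; the function and variable names are illustrative, and the gradient ascent on S with respect to the candidate weights is not shown:

```python
import numpy as np

def correlation_score(candidate_out, residual_error):
    """S = sum over outputs o of | sum over patterns p of (V_p - V_mean)(E_po - E_mean_o) |."""
    v = candidate_out - candidate_out.mean()              # centred candidate outputs, shape (P,)
    e = residual_error - residual_error.mean(axis=0)      # centred network errors, shape (P, m)
    return np.abs(v @ e).sum()

def pick_best_candidate(candidate_outputs, residual_error):
    """Choose the candidate whose output correlates best with the residual error."""
    scores = [correlation_score(v, residual_error) for v in candidate_outputs]
    return int(np.argmax(scores))
```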

5. Remarks on the Cascade-Correlation Algorithm
Greedy learning mechanism
Incremental, constructive learning algorithm
– easy to learn additional examples
Typically faster than backpropagation
– only one layer of weights is optimised in each step (linear complexity)
Easy to parallelise the maximisation of the correlation, since the candidates are trained independently

6. Associative Memory
Problem:
– store a set of p patterns
– when given a new pattern, the network returns the stored pattern that most closely resembles it
– the memory should be insensitive to small errors in the input pattern
Content-addressable memory – the search key is a portion of the stored information itself
– autoassociative: refinement of the input information (B&W picture → colours)
– heteroassociative: evocation of associated information (friend’s picture → name)

7. Hopfield Model
Auto-associative memory
Topology – a cyclic network of n completely interconnected neurons
– ξ_1, …, ξ_n ∈ Z – internal potentials
– y_1, …, y_n ∈ {−1, 1} – bipolar outputs
– w_ji ∈ Z – weight of the connection from the i-th to the j-th neuron
– w_jj = 0 (j = 1, …, n) – no self-connections

8. Adaptation According to the Hebb Rule
Hebb rule – synaptic strengths in the brain change in response to experience
Changes are proportional to the correlation between the firing of the pre- and post-synaptic neurons
Technically:
– training set: T = {x_k | x_k = (x_k1, …, x_kn) ∈ {−1, 1}^n, k = 1, …, p}
1. Start with w_ji = 0 (j = 1, …, n; i = 1, …, n)
2. For every example x_k in the training set update
   w_ji = w_ji + x_kj x_ki,   for 1 ≤ j ≠ i ≤ n
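
A compact NumPy sketch of this adaptation phase in matrix form; the 1/n scaling used in some formulations is omitted to match the integer weights above:

```python
import numpy as np

def hebb_weights(patterns):
    """Hebb-rule weights for a Hopfield net: w_ji = sum_k x_kj * x_ki, w_jj = 0."""
    X = np.asarray(patterns)          # shape (p, n), entries in {-1, +1}
    W = X.T @ X                       # sum of outer products over the p patterns
    np.fill_diagonal(W, 0)            # no self-connections (w_jj = 0)
    return W                          # symmetric: w_ji = w_ij
```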

9. Remarks on the Hebb Rule
Training examples are represented in the net through the relations between the neurons’ states
Symmetric network: w_ji = w_ij
Adaptation can be seen as the examples voting about each weight: x_kj = x_ki (YES) vs. x_kj ≠ x_ki (NO)
– the sign of the weight reflects which vote prevails
– the absolute value of the weight reflects the margin of the vote

10. Active Mode of the Hopfield Network
1. Set y_i = x_i (i = 1, …, n)
2. Go through the neurons and at each time step select one neuron j to be updated according to the following rule:
– compute its internal potential: ξ_j = Σ_{i=1..n} w_ji y_i
– set its new state: y_j = 1 if ξ_j > 0, y_j = −1 if ξ_j < 0, y_j unchanged if ξ_j = 0
3. If the configuration is not stable, go to step 2; otherwise end – the output of the net is given by the final state of the neurons
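
A sketch of this asynchronous recall loop; the random sweep order and the bounded number of sweeps are assumptions added for practicality:

```python
import numpy as np

def hopfield_recall(W, x, max_sweeps=100, rng=None):
    """Active mode: start from the input pattern and update single neurons
    asynchronously until a stable configuration is reached."""
    rng = np.random.default_rng() if rng is None else rng
    y = np.array(x, dtype=int)
    for _ in range(max_sweeps):
        changed = False
        for j in rng.permutation(len(y)):     # pick one neuron at a time
            xi = W[j] @ y                     # internal potential xi_j
            new = y[j] if xi == 0 else (1 if xi > 0 else -1)
            if new != y[j]:
                y[j] = new
                changed = True
        if not changed:                       # no neuron changed state -> stable
            return y
    return y
```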

11. Energy Function and Energy Landscape
Energy function: E(y) = −(1/2) Σ_j Σ_i w_ji y_j y_i
Energy landscape:
– high energy – unstable states
– low energy – more stable states
– the energy always decreases (or remains constant) as the system evolves according to its dynamical rule
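
A one-line sketch of this energy, useful for checking that each asynchronous update above can only lower (or preserve) E:

```python
import numpy as np

def hopfield_energy(W, y):
    """E(y) = -1/2 * sum_{j,i} w_ji * y_j * y_i  (diagonal of W assumed zero)."""
    y = np.asarray(y, dtype=float)
    return -0.5 * y @ W @ y
```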

12. Energy Landscape
Local minima of the energy function represent the stored examples – attractors
Basins of attraction – catchment areas around each minimum
Spurious local minima that correspond to no stored example – phantoms

13. Storage Capacity of the Hopfield Network
Assume random patterns stored with equal probability
P_error – the probability that any chosen bit is unstable
– depends on the number of units n and the number of patterns p
Capacity of the network – the maximum number of patterns that can be stored without unacceptable errors
Results:
– p ≤ 0.138 n – training examples are local minima of E(y)
– p < 0.05 n – training examples are global minima of E(y), deeper than the minima corresponding to phantoms
Example: 10 training examples, 200 neurons → 40 000 weights
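
Plugging the example's numbers into these bounds (a quick arithmetic check, not part of the slides):

```python
n = 200                                   # neurons in the example
print("weights:", n * n)                  # 40000 entries in the full n x n matrix (zero diagonal)
print("p (local minima)  <=", 0.138 * n)  # ~27.6 patterns
print("p (global minima) < ", 0.05 * n)   # 10 patterns -- the example stores exactly 10
```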

14. Hopfield Network: Example
Pattern recognition:
– 8 examples, each a 12 × 10-pixel matrix → 120 neurons
– input pattern corrupted in 25 % of its bits
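
A toy end-to-end run with the same dimensions, reusing the Hebb-rule and recall sketches above; random ±1 patterns stand in for the actual bitmaps, which are not part of the transcript:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 120, 8                                     # 12 x 10 pixels -> 120 neurons, 8 patterns
patterns = rng.choice([-1, 1], size=(p, n))       # random stand-ins for the stored bitmaps
W = hebb_weights(patterns)                        # adaptation (Hebb-rule sketch above)

probe = patterns[0].copy()
flip = rng.choice(n, size=n // 4, replace=False)  # corrupt 25% of the bits
probe[flip] *= -1

recalled = hopfield_recall(W, probe)              # active mode (recall sketch above)
print("bits recovered:", int((recalled == patterns[0]).sum()), "of", n)
```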

15. Self-Organisation
Unsupervised learning
– the network must discover patterns, features, regularities, or categories in the input data by itself and code them in its output
Units and connections must display some degree of self-organisation
Competitive learning
– output units compete for being excited
– only one output unit is on at a time (winner-takes-all mechanism)
Feature mapping
– development of a significant spatial organisation in the output layer
Applications:
– function approximation, image processing, statistical analysis
– combinatorial optimisation

16. Self-Organising Network
Goal: approximate the probability distribution of real-valued input vectors with a finite set of units (representatives)
Given a training set T of examples x ∈ R^n and a number of representatives h
Network topology: a single layer of h output units fully connected to the n inputs
– the weights belonging to one output unit determine its position in the input space
– lateral inhibitions among the output units implement the competition

17. Self-Organising Network and Kohonen Learning
Principle: go through the training set and for each example x select the winning output neuron j and modify its weights as follows
   w_ji = w_ji + η (x_i − w_ji)
where the real parameter 0 < η < 1 determines the scale of the changes
– the winning neuron is shifted towards the current input in order to improve its relative position
– closely related to k-means clustering (see the sketch below)
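
A minimal sketch of this winner-takes-all learning; initialising the representatives from random training examples and the fixed number of epochs are assumptions:

```python
import numpy as np

def competitive_learning(X, h, eta=0.1, epochs=20, rng=None):
    """Plain winner-takes-all learning: each representative is pulled
    towards the inputs it wins, w_ji += eta * (x_i - w_ji)."""
    rng = np.random.default_rng() if rng is None else rng
    X = np.asarray(X, dtype=float)
    W = X[rng.choice(len(X), size=h, replace=False)].copy()    # init from training examples
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            j = int(np.argmin(np.linalg.norm(W - x, axis=1)))  # winner = nearest representative
            W[j] += eta * (x - W[j])                           # shift the winner towards x
    return W
```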

18. Kohonen Self-Organising Maps
Topology – as in the previous case, except that
– there are no lateral connections
– the output units are arranged in a structure that defines a neighbourhood
– typically a one- or two-dimensional array of units

19. Kohonen Self-Organising Maps
Neighbourhood of the output neuron c: N_s(c) = {j; d(j, c) ≤ s} – the set of neurons whose distance from c is at most s
Learning algorithm:
– the weight update rule involves the neighbourhood relations
– the weights of the winner as well as of the units close to it are changed according to
   w_ji = w_ji + h_c(j) (x_i − w_ji),   j ∈ N_s(c)
  where h_c(j) is either a constant learning rate η for all j ∈ N_s(c) or a Gaussian function of the distance d(j, c)
– closer units are affected more than those further away
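
A sketch of SOM training with a Gaussian neighbourhood on a two-dimensional grid; the grid size and the fixed η and σ (in practice both are usually decreased during training) are assumptions:

```python
import numpy as np

def train_som(X, grid=(10, 10), eta=0.5, sigma=2.0, epochs=20, rng=None):
    """Kohonen SOM: the winner c and its grid neighbours move towards x,
    weighted by a Gaussian neighbourhood function h_c(j)."""
    rng = np.random.default_rng() if rng is None else rng
    X = np.asarray(X, dtype=float)
    rows, cols = grid
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
    W = X[rng.choice(len(X), size=rows * cols, replace=True)].copy()  # one weight vector per unit
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            c = int(np.argmin(np.linalg.norm(W - x, axis=1)))   # winning unit
            d2 = ((coords - coords[c]) ** 2).sum(axis=1)        # squared grid distance d(j, c)^2
            h = eta * np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian h_c(j): closer units move more
            W += h[:, None] * (x - W)                           # update the winner and its neighbours
    return W.reshape(rows, cols, -1)
```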

20. Kohonen Maps: Examples

