Artificial Neural Networks II - Outline
J. Kubalík, Gerstner Laboratory for Intelligent Decision Making and Control

Cascade Nets and Cascade-Correlation Algorithm
– Architecture - incremental building of the net
Hopfield Networks
– Recurrent networks, associative memory
– Hebb learning rule
– Energy function and capacity of the Hopfield network
– Applications
Self-Organising Networks
– Spatial representation of data used to code the information
– Unsupervised learning
– Kohonen Self-Organising Maps
– Applications

Cascade Nets and Cascade-Correlation Algorithm
Starts with the input and output layers of neurons and builds a hierarchy of hidden units
– feed-forward network with n input, m output and h hidden units
Perceptrons in the hidden layer are ordered - cascaded lateral connections
– inputs come from the input layer and from all antecedent hidden units
– the i-th hidden unit therefore has n + (i-1) inputs
Output units are connected to all input and hidden units

Cascade Nets: Topology
[Figure: cascade topology with input (x), hidden (z) and output (y) units]
Active mode
– hidden perceptrons: z_i = f( Σ_{k=1..n} v_ik·x_k + Σ_{l=1..i-1} u_il·z_l ), for i = 1, …, h
– output units: y_j = f( Σ_{k=1..n} w_jk·x_k + Σ_{l=1..h} w'_jl·z_l ), for j = 1, …, m
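The active mode can be illustrated with a short Python sketch (the names cascade_forward, hidden_w and output_w are illustrative; the slides do not fix an implementation):

```python
import numpy as np

def cascade_forward(x, hidden_w, output_w, f=np.tanh):
    """Active mode of a cascade net.

    x        : input vector of length n
    hidden_w : list of h weight vectors; the i-th one (1-based) has n + (i-1)
               entries, one for each input and each antecedent hidden unit
    output_w : (m, n + h) matrix; every output unit sees all inputs and hidden units
    f        : activation function of the perceptrons
    """
    activations = list(x)                    # grows as hidden units are evaluated in order
    for w in hidden_w:
        z = f(np.dot(w, activations))        # uses the inputs and all antecedent hidden units
        activations.append(z)
    return f(np.dot(output_w, activations))  # outputs connected to all inputs and hidden units
```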

Cascade-Correlation Algorithm
Start with a minimal configuration of the network (h = 0)
Repeat until satisfied:
– initialise a set of candidates for a new hidden unit, i.e. connect them to the input units and to all existing hidden units
– adapt their weights so as to maximise the correlation between their outputs and the residual error of the network
– choose the best candidate and connect it (with its weights frozen) to the outputs
– adapt the weights of the output perceptrons
(a compact sketch of this loop is given below)
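For concreteness, here is a Python sketch of the loop under simplifying assumptions of mine: the output units are linear and fitted by least squares, and the candidates are trained by plain gradient ascent on the covariance between their output and the residual error (Fahlman and Lebiere's original algorithm uses quickprop). All names and parameters are illustrative.

```python
import numpy as np

def train_cascade_correlation(X, Y, max_hidden=5, n_candidates=8,
                              candidate_epochs=300, lr=0.1, seed=0):
    """Grow a cascade network: X is a (p, n) input matrix, Y a (p, m) target matrix."""
    rng = np.random.default_rng(seed)
    A = X.copy()                                       # columns: inputs + frozen hidden-unit outputs
    hidden = []                                        # weight vectors of the installed hidden units
    while True:
        F = np.hstack([A, np.ones((len(A), 1))])       # current features plus a bias column
        W_out, *_ = np.linalg.lstsq(F, Y, rcond=None)  # adapt the output weights
        E = Y - F @ W_out                              # residual error of the network
        if len(hidden) == max_hidden:
            return hidden, W_out
        E_c = E - E.mean(axis=0)                       # centred error, shape (p, m)
        best_w, best_score = None, -np.inf
        for _ in range(n_candidates):                  # pool of candidate hidden units
            w = rng.normal(scale=0.5, size=F.shape[1])
            for _ in range(candidate_epochs):          # maximise sum over outputs of |covariance|
                v = np.tanh(F @ w)
                C = (v - v.mean()) @ E_c               # covariance with each output's error
                grad = F.T @ ((1 - v ** 2) * (E_c @ np.sign(C)))
                w += lr * grad / len(F)
            v = np.tanh(F @ w)
            score = np.abs((v - v.mean()) @ E_c).sum()
            if score > best_score:
                best_w, best_score = w, score
        hidden.append(best_w)                          # freeze the best candidate
        A = np.hstack([A, np.tanh(F @ best_w)[:, None]])  # its output becomes a new input feature
```

Note that only one layer of weights is trained at a time: either the candidate weights (with the rest of the net frozen) or the output weights, which is what keeps each step cheap.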

Remarks on the Cascade-Correlation Algorithm
Greedy learning mechanism
Incremental, constructive learning algorithm
– easy to learn additional examples
Typically faster than backpropagation
– only one layer of weights is optimised in each step (linear complexity)
The correlation-maximisation step is easy to parallelise (the candidate units are trained independently of each other)

Associative Memory
Problem:
– store a set of p patterns
– when given a new pattern, the network returns the stored pattern that most closely resembles it
– the recall should be insensitive to small errors in the input pattern
Content-addressable memory - the index key for searching the memory is a portion of the searched information
– autoassociative - refinement of the input information (B&W picture → colours)
– heteroassociative - evocation of associated information (friend's picture → name)

Hopfield Model
Auto-associative memory
Topology - a cyclic network of n completely interconnected neurons
– ξ_1, …, ξ_n ∈ Z - internal potentials
– y_1, …, y_n ∈ {-1, 1} - bipolar outputs
– w_ji ∈ Z - weight of the connection from the i-th to the j-th neuron
– w_jj = 0 (j = 1, …, n), i.e. no self-connections

Adaptation According to the Hebb Rule
Hebb rule - synaptic strengths in the brain change in response to experience
Changes are proportional to the correlation between the firing of the pre- and post-synaptic neurons
Technically:
– training set: T = {x_k | x_k = (x_k1, …, x_kn) ∈ {-1, 1}^n, k = 1, …, p}
1. Start with w_ji = 0 (j = 1, …, n; i = 1, …, n)
2. For the given training set, set w_ji = Σ_{k=1..p} x_kj·x_ki, for 1 ≤ j ≠ i ≤ n
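A minimal sketch of this adaptation step in Python (the helper name hebb_train is mine):

```python
import numpy as np

def hebb_train(patterns):
    """Hebbian storage for a Hopfield net: w_ji = sum_k x_kj * x_ki, zero diagonal."""
    X = np.asarray(patterns)   # shape (p, n), bipolar entries in {-1, +1}
    W = X.T @ X                # correlation of pre- and post-synaptic activity over the training set
    np.fill_diagonal(W, 0)     # enforce w_jj = 0 (no self-connections)
    return W
```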

Remarks on the Hebb Rule
Training examples are represented in the net through the relations between the neurons' states
Symmetric network: w_ji = w_ij
Adaptation can be seen as the examples voting about each weight: x_kj = x_ki counts as YES, x_kj ≠ x_ki as NO
– the sign of the weight is determined by the majority vote
– the absolute value of the weight is the margin by which the vote was won

Active Mode of the Hopfield Network
1. Set y_i = x_i (i = 1, …, n)
2. Go through all neurons and at each time step select one neuron j to be updated according to the following rule:
– compute its internal potential: ξ_j = Σ_{i=1..n} w_ji·y_i
– set its new state: y_j = 1 if ξ_j > 0, y_j = -1 if ξ_j < 0, and y_j is left unchanged if ξ_j = 0
3. If the configuration is not stable, go to step 2; otherwise end - the output of the net is given by the final states of the neurons
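A sketch of this recall procedure, assuming the weight matrix produced by the hebb_train sketch above (the function name hopfield_recall is mine):

```python
import numpy as np

def hopfield_recall(W, x, max_sweeps=100, seed=0):
    """Asynchronous active mode: update randomly chosen neurons until the state is stable."""
    rng = np.random.default_rng(seed)
    y = np.asarray(x).copy()
    for _ in range(max_sweeps):
        changed = False
        for j in rng.permutation(len(y)):      # one neuron at a time, in random order
            xi = W[j] @ y                      # internal potential of neuron j
            new = y[j] if xi == 0 else (1 if xi > 0 else -1)
            if new != y[j]:
                y[j], changed = new, True
        if not changed:                        # stable configuration reached
            break
    return y                                   # the output is the final state of the neurons
```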

Energy Function and Energy Landscape
Energy function: E(y) = -1/2 · Σ_{j=1..n} Σ_{i=1..n} w_ji·y_j·y_i
Energy landscape:
– high energy - unstable states
– low energy - more stable states
– the energy always decreases (or remains constant) as the system evolves according to its dynamical rule
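In code the energy of a configuration is a one-liner; together with the recall sketch above it can be used to check that the energy never increases during the asynchronous updates:

```python
import numpy as np

def hopfield_energy(W, y):
    """E(y) = -1/2 * sum_j sum_i w_ji * y_j * y_i for a symmetric W with zero diagonal."""
    y = np.asarray(y)
    return -0.5 * (y @ W @ y)
```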

Energy Landscape
Local minima of the energy function represent the stored examples - they act as attractors
Basins of attraction - catchment areas around each minimum
Spurious local minima that do not correspond to any stored example - phantoms

Storage Capacity of the Hopfield Network
Assume random patterns whose bits are +1 or -1 with equal probability
P_error - probability that any chosen bit is unstable
– depends on the number of units n and the number of patterns p
Capacity of the network - the maximum number of patterns that can be stored without unacceptable errors
Results:
– p ≤ 0.138·n - training examples are local minima of E(y)
– p < 0.05·n - training examples are global minima of E(y), deeper than the minima corresponding to phantoms
Example: 10 training examples stored in a network of 200 neurons, i.e. n·(n-1)/2 = 19 900 symmetric weights
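The dependence of P_error on n and p can be estimated by treating the crosstalk between patterns as Gaussian noise, which gives P_error ≈ ½·erfc(√(n/2p)); the snippet below (function name mine) shows the numbers behind the 0.138·n rule of thumb:

```python
from math import erfc, sqrt

def p_error(n, p):
    """Estimated probability that a chosen bit of a stored random pattern is unstable."""
    return 0.5 * erfc(sqrt(n / (2 * p)))

print(p_error(200, 10))   # ~4e-6: 10 patterns in 200 neurons, far below capacity
print(p_error(200, 28))   # ~0.004: at p ~ 0.138*n roughly 0.4 % of the bits are unstable
```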

Hopfield Network: Example
Pattern recognition:
– 8 examples, a 12 × 10 pixel matrix → 120 neurons
– input pattern with 25% wrong bits
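A small end-to-end check in the spirit of this example, reusing the hebb_train and hopfield_recall sketches above; the patterns here are random bipolar vectors rather than the letter bitmaps shown on the slide:

```python
import numpy as np

rng = np.random.default_rng(1)
patterns = rng.choice([-1, 1], size=(8, 120))   # 8 examples on a 12 x 10 "pixel" grid
W = hebb_train(patterns)

noisy = patterns[0].copy()
flip = rng.choice(120, size=30, replace=False)  # corrupt 25 % of the bits
noisy[flip] *= -1

recalled = hopfield_recall(W, noisy)
print(np.array_equal(recalled, patterns[0]))    # with p = 8 and n = 120 the recall typically succeeds
```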

Self-Organisation
Unsupervised learning
– the network must discover for itself patterns, features, regularities, or categories in the input data and code them in its output
Units and connections must display some degree of self-organisation
Competitive learning
– output units compete to be activated
– only one output unit is on at a time (winner-takes-all mechanism)
Feature mapping
– development of a significant spatial organisation in the output layer
Applications:
– function approximation, image processing, statistical analysis
– combinatorial optimisation

Self-Organising Network
The goal is to approximate the probability distribution of real-valued input vectors with a finite set of units
Given: a training set T of examples x ∈ R^n and a number of representatives h
Network topology: a single layer of h output units, each fully connected to all n inputs
– the weights belonging to one output unit determine its position in the input space
– lateral inhibitions between the output units

Self-Organising Network and Kohonen Learning
Principle: go through the training set and for each example select the winner output neuron j (the unit whose weight vector lies closest to the example) and modify its weights as follows
w_ji = w_ji + θ·(x_i - w_ji)
where the real parameter 0 < θ < 1 determines the scale of the changes
– the winner neuron is shifted towards the current input in order to improve its relative position
– the procedure is closely related to k-means clustering
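A sketch of this winner-takes-all rule in Python (function and parameter names are mine; θ is written as theta):

```python
import numpy as np

def competitive_learning(X, h, theta=0.1, epochs=20, seed=0):
    """Shift the winner unit towards each example: w_j := w_j + theta * (x - w_j)."""
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), size=h, replace=False)].copy()  # initialise the h units on data points
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            j = np.argmin(np.linalg.norm(W - x, axis=1))     # winner: closest weight vector
            W[j] += theta * (x - W[j])                       # move it towards the current input
    return W                                                 # rows approximate cluster centres, as in k-means
```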

Kohonen Self-Organising Maps
Topology - as in the previous case, except that
– there are no lateral connections
– the output units are arranged in a structure that defines a neighbourhood
– typically a one- or two-dimensional array of units

Kohonen Self-Organising Maps
Neighbourhood of the output neuron c: N_s(c) = {j; d(j, c) ≤ s} is the set of neurons whose distance from c in the output structure is at most s
Learning algorithm:
– the weight update rule involves the neighbourhood relations
– the weights of the winner c as well as of the units close to it are changed according to
w_ji = w_ji + h_c(j)·(x_i - w_ji), for j ∈ N_s(c)
where h_c(j) is either a constant θ on the neighbourhood (and 0 outside it) or a Gaussian function of the distance d(j, c)
– closer units are more affected than those further away
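A compact Kohonen-map sketch along these lines, using a Gaussian neighbourhood that shrinks during training; the grid size, learning rate and radius schedule are illustrative choices of mine:

```python
import numpy as np

def train_som(X, grid=(10, 10), theta=0.5, sigma=3.0, epochs=30, seed=0):
    """Units live on a 2-D grid; the winner and its grid neighbours move towards each input."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
    W = rng.uniform(X.min(axis=0), X.max(axis=0), size=(rows * cols, X.shape[1]))
    for epoch in range(epochs):
        sig = 0.5 + sigma * (1 - epoch / epochs)            # shrink the neighbourhood radius
        for x in X[rng.permutation(len(X))]:
            c = np.argmin(np.linalg.norm(W - x, axis=1))    # winner output unit
            d2 = ((coords - coords[c]) ** 2).sum(axis=1)    # squared grid distance d(j, c)^2
            h_c = theta * np.exp(-d2 / (2 * sig ** 2))      # closer units are affected more
            W += h_c[:, None] * (x - W)                     # w_ji += h_c(j) * (x_i - w_ji)
    return W.reshape(rows, cols, -1)
```

Shrinking the radius lets the map first unfold globally over the data and then fine-tune individual units locally.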

Kohonen Maps: Examples