Artificial Neural Networks Torsten Reil

Outline
What are Neural Networks?
Biological Neural Networks
ANNs – The basics
Feed-forward nets
Training
Example – Voice recognition
Applications – Feed-forward nets
Recurrency
Elman nets
Hopfield nets
Central Pattern Generators
Conclusion

What are Neural Networks?
Models of the brain and nervous system
Highly parallel
– Process information much more like the brain than a serial computer
Learning
Very simple principles
Very complex behaviours
Applications
– As powerful problem solvers
– As biological models

Biological Neural Nets
Pigeons as art experts (Watanabe et al. 1995)
– Experiment: pigeon in a Skinner box
– Present paintings by two different artists (e.g. Chagall / Van Gogh)
– Reward for pecking when presented with a particular artist (e.g. Van Gogh)

Pigeons were able to discriminate between Van Gogh and Chagall with 95% accuracy (when presented with pictures they had been trained on)
Discrimination was still 85% successful for previously unseen paintings by the same artists
Pigeons do not simply memorise the pictures
They can extract and recognise patterns (the ‘style’)
They generalise from what they have already seen to make predictions
This is what neural networks (biological and artificial) are good at (unlike conventional computers)

ANNs – The basics
ANNs incorporate the two fundamental components of biological neural nets:
1. Neurones (nodes)
2. Synapses (weights)

Neurone vs. Node

Structure of a node: the weighted inputs are summed, and a squashing function limits the node's output.
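As a concrete illustration, a single node can be sketched in a few lines of Python. The sigmoid shown here is an assumption for the squashing function (the slide does not name it), but it is the classic choice:

```python
import numpy as np

def sigmoid(x):
    # Squashing function: limits the node's output to the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def node_output(inputs, weights):
    # A node sums its weighted inputs, then squashes the result
    return sigmoid(np.dot(inputs, weights))
```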

Synapse vs. weight

Feed-forward nets
Information flow is unidirectional
Data is presented to the input layer
Passed on to the hidden layer
Passed on to the output layer
Information is distributed
Information processing is parallel
Internal representation (interpretation) of data
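Reusing the sigmoid from the node sketch above, the one-way flow through the layers might look like this (the matrix form of the weights is an illustrative assumption):

```python
def feed_forward(x, W_hidden, W_output):
    # Input layer -> hidden layer -> output layer, with no feedback connections
    hidden = sigmoid(x @ W_hidden)      # each hidden node squashes its weighted input sum
    output = sigmoid(hidden @ W_output)
    return output
```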

Feeding data through the net: (1 × 0.25) + (0.5 × (−1.5)) = 0.25 − 0.75 = −0.5, which is then passed through the squashing function to give the node's output.
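Checked with the node sketch above (sigmoid assumed), the same numbers give:

```python
inputs  = np.array([1.0, 0.5])
weights = np.array([0.25, -1.5])
print(node_output(inputs, weights))   # weighted sum = -0.5, squashed output ≈ 0.38
```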

Data is presented to the network in the form of activations in the input layer
Examples
– Pixel intensities (for pictures)
– Molecule concentrations (for an artificial nose)
– Share prices (for stock market prediction)
Data usually requires preprocessing
– Analogous to the senses in biology
How to represent more abstract data, e.g. a name?
– Choose a pattern, e.g. one binary pattern for “Chris” and another for “Becky”
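One common scheme for names is to give each its own binary vector; the sketch below uses a one-hot code as an illustration (the slide's original bit patterns are not reproduced here):

```python
names = ["Chris", "Becky"]
# One distinct binary pattern per name, e.g. a one-hot code
encoding = {name: [1 if i == idx else 0 for i in range(len(names))]
            for idx, name in enumerate(names)}
# encoding == {"Chris": [1, 0], "Becky": [0, 1]}
```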

Weight settings determine the behaviour of a network → How can we find the right weights?

Training the Network - Learning
Backpropagation
– Requires a training set (input / output pairs)
– Starts with small random weights
– Error is used to adjust the weights (supervised learning)
→ Gradient descent on the error landscape
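A minimal backpropagation sketch for a single-hidden-layer network, assuming sigmoid activations and a squared-error measure (illustrative, not the exact procedure used in the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def init_net(n_in, n_hidden, n_out):
    # Start with small random weights
    return {"W1": rng.normal(0.0, 0.1, (n_in, n_hidden)),
            "W2": rng.normal(0.0, 0.1, (n_hidden, n_out))}

def train_step(net, x, target, lr=0.5):
    # Forward pass
    h = sigmoid(x @ net["W1"])
    y = sigmoid(h @ net["W2"])
    # Backpropagate the error through the sigmoid derivatives
    delta_out = (y - target) * y * (1 - y)
    delta_hid = (delta_out @ net["W2"].T) * h * (1 - h)
    # Gradient descent on the error landscape
    net["W2"] -= lr * np.outer(h, delta_out)
    net["W1"] -= lr * np.outer(x, delta_hid)
    return np.sum((y - target) ** 2)   # squared error for this pattern
```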

Advantages
– It works!
– Relatively fast
Downsides
– Requires a training set
– Can be slow
– Probably not biologically realistic
Alternatives to backpropagation
– Hebbian learning: not successful in feed-forward nets
– Reinforcement learning: only limited success
– Artificial evolution: more general, but can be even slower than backprop

Example: Voice Recognition
Task: learn to discriminate between two different voices saying “Hello”
Data
– Sources: Steve Simpson, David Raubenheimer
– Format: frequency distribution (60 bins)
– Analogy: cochlea

Network architecture
– Feed-forward network
– 60 input nodes (one for each frequency bin)
– 6 hidden nodes
– 2 output nodes (0-1 for “Steve”, 1-0 for “David”)
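With the backpropagation sketch above, this architecture would be set up roughly as follows (illustrative):

```python
net = init_net(n_in=60, n_hidden=6, n_out=2)   # 60 frequency bins -> 6 hidden -> 2 outputs
targets = {"Steve": np.array([0.0, 1.0]),      # output coding 0-1 for "Steve"
           "David": np.array([1.0, 0.0])}      # output coding 1-0 for "David"
```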

Presenting the data (figure: inputs for “Steve” and “David”)

Presenting the data – untrained network (figure: outputs for “Steve” and “David”)

Calculate error (output minus target) at each output node – e.g. 0.43 – 0 = 0.43 on Steve’s first output node and 0.55 – 0 = 0.55 on David’s second; the remaining outputs are compared against targets of 1.

Backprop error and adjust weights: the same error terms (output minus target for Steve and David) are propagated back through the network to update the weights.

Repeat the process (a sweep) for all training pairs
– Present data
– Calculate error
– Backpropagate error
– Adjust weights
Repeat the process multiple times
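In code, a sweep and its repetition look roughly like this, reusing the train_step sketch above; training_data is assumed to be a list of (frequency_bins, target) pairs:

```python
def train(net, training_data, n_sweeps=1000):
    for _ in range(n_sweeps):                  # repeat the process multiple times
        for x, target in training_data:        # one sweep: present every training pair
            train_step(net, x, target)         # calculate error, backpropagate, adjust weights
    return net
```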

Presenting the data – trained network (figure: outputs for “Steve” and “David”)

Results – Voice Recognition
Performance of the trained network
– Discrimination accuracy between known “Hello”s: 100%
– Discrimination accuracy between new “Hello”s: 100%
Demo

Results – Voice Recognition (cont’d)
– The network has learnt to generalise from the original data
– Networks with different weight settings can have the same functionality
– Trained networks ‘concentrate’ on the lower frequencies
– The network is robust against non-functioning nodes

Applications of Feed-forward nets
– Pattern recognition: character recognition, face recognition
– Sonar mine/rock recognition (Gorman & Sejnowski, 1988)
– Navigation of a car (Pomerleau, 1989)
– Stock-market prediction
– Pronunciation (NETtalk) (Sejnowski & Rosenberg, 1987)

Cluster analysis of hidden layer

FFNs as Biological Modelling Tools
Signalling / Sexual Selection
– Enquist & Arak (1994): preference for symmetry is not selection for ‘good genes’, but instead arises through the need to recognise objects irrespective of their orientation
– Johnstone (1994): exaggerated, symmetric ornaments facilitate mate recognition (but see Dawkins & Guilford, 1995)

Recurrent Networks
Feed-forward networks:
– Information only flows one way
– One input pattern produces one output
– No sense of time (or memory of previous state)
Recurrency
– Nodes connect back to other nodes or themselves
– Information flow is multidirectional
– Sense of time and memory of previous state(s)
Biological nervous systems show high levels of recurrency (but feed-forward structures exist too)

Elman Nets
Elman nets are feed-forward networks with partial recurrency
Unlike feed-forward nets, Elman nets have a memory or sense of time
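The defining feature is a set of context units that hold a copy of the previous hidden state and feed it back in alongside the current input. A minimal sketch (sigmoid activations assumed):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def elman_step(x, context, W_in, W_context, W_out):
    # The hidden layer sees the current input AND a copy of the previous hidden state
    h = sigmoid(x @ W_in + context @ W_context)
    y = sigmoid(h @ W_out)
    return y, h   # h becomes the context for the next time step
```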

Classic experiment on language acquisition and processing (Elman, 1990)
Task: train an Elman net to predict successive words in sentences
Data
– Suite of sentences, e.g. “The boy catches the ball.”, “The girl eats an apple.”
– Words are input one at a time
Representation
– A distinct binary vector for each word, e.g. one pattern for “girl”
Training method
– Backpropagation

Internal representation of words

Hopfield Networks
Sub-type of recurrent neural nets
– Fully recurrent
– Weights are symmetric
– Nodes can only be on or off
– Random updating
Learning: Hebb rule (cells that fire together wire together)
– Biological equivalent to LTP and LTD
Can recall a memory if presented with a corrupt or incomplete version
→ auto-associative or content-addressable memory

Task: store images with a resolution of 20×20 pixels → Hopfield net with 400 nodes
Memorise:
1. Present image
2. Apply Hebb rule (cells that fire together, wire together): increase the weight between two nodes if both have the same activity, otherwise decrease it
3. Go to 1
Recall:
1. Present incomplete pattern
2. Pick a random node and update it
3. Go to 2 until settled
DEMO
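A compact sketch of both phases, assuming ±1 node states and the outer-product form of the Hebb rule:

```python
import numpy as np

rng = np.random.default_rng(0)

def memorise(patterns):
    # Hebb rule: strengthen the weight between two nodes with the same activity,
    # weaken it otherwise (outer-product form); no self-connections
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:              # each pattern is a vector of +1 / -1 node states
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W

def recall(W, state, n_steps=5000):
    state = state.copy()
    for _ in range(n_steps):        # random asynchronous updating until settled
        i = rng.integers(len(state))
        state[i] = 1 if W[i] @ state >= 0 else -1
    return state
```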

Memories are attractors in state space

Catastrophic forgetting
Problem: memorising new patterns corrupts the memory of older ones
→ Old memories cannot be recalled, or spurious memories arise
Solution: allow the Hopfield net to sleep

Two approaches (both using randomness):
– Unlearning (Hopfield, 1986)
Recall old memories by random stimulation, but use an inverse Hebb rule
→ ‘Makes room’ for new memories (basins of attraction shrink)
– Pseudorehearsal (Robins, 1995)
While learning new memories, recall old memories by random stimulation
Use the standard Hebb rule on both new and old memories
→ Restructures memory
Needs short-term and long-term memory
Mammals: the hippocampus plays back new memories to the neocortex, which is randomly stimulated at the same time

RNNs as Central Pattern Generators
CPGs: groups of neurones creating rhythmic muscle activity for locomotion, heartbeat, etc.
Identified in several invertebrates and vertebrates
Hard to study → computer modelling
– E.g. lamprey swimming (Ijspeert et al., 1998)

Evolution of Bipedal Walking (Reil & Husbands, 2001)

CPG cycles are cyclic attractors in state space

Recap – Neural Networks
Components – biological plausibility
– Neurone / node
– Synapse / weight
Feed-forward networks
– Unidirectional flow of information
– Good at extracting patterns, generalisation and prediction
– Distributed representation of data
– Parallel processing of data
– Training: backpropagation
– Not exact models, but good at demonstrating principles
Recurrent networks
– Multidirectional flow of information
– Memory / sense of time
– Complex temporal dynamics (e.g. CPGs)
– Various training methods (Hebbian, evolution)
– Often better biological models than FFNs

Online material: