Artificial Neural Networks Students: Albu Alexandru, Deaconescu Ionuț

Overview 1. Biological inspiration 2. Artificial neurons and neural networks 3. Learning processes 4. Learning with artificial neural networks

Biological inspiration Animals are able to react adaptively to changes in their external and internal environment, and they use their nervous system to produce these behaviours. An appropriate model/simulation of the nervous system should be able to produce similar responses and behaviours in artificial systems. The nervous system is built from relatively simple units, the neurons, so reproducing their behaviour and functionality should be a reasonable route towards this goal.

Biological inspiration The main structural parts of a neuron: the dendrites, the soma (cell body), and the axon.

Biological inspiration The axon of one neuron contacts the dendrites of other neurons; the information transmission happens at these junctions, the synapses.

Biological inspiration The spikes travelling along the axon of the pre-synaptic neuron trigger the release of neurotransmitter substances at the synapse. The neurotransmitters cause excitation or inhibition in the dendrite of the post-synaptic neuron. The integration of the excitatory and inhibitory signals may produce spikes in the post-synaptic neuron. The contribution of the signals depends on the strength of the synaptic connection.

Artificial neurons Neurons work by processing information. They receive and provide information in the form of spikes. The McCulloch-Pitts model: a unit that receives inputs $x_1, x_2, \dots, x_n$ through synaptic weights $w_1, w_2, \dots, w_n$ and produces a single output $y$.

Artificial neurons The McCulloch-Pitts model: spikes are interpreted as spike rates; synaptic strengths are translated into synaptic weights; excitation means a positive product between the incoming spike rate and the corresponding synaptic weight; inhibition means a negative product between the incoming spike rate and the corresponding synaptic weight. The unit fires ($y = 1$) when the weighted sum of its inputs reaches a threshold.
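To make this concrete, here is a minimal Python sketch of a McCulloch-Pitts unit; the weights and threshold in the example are arbitrary choices (my assumption, not values from the slides) that make the unit behave like a logical AND:

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Fire (output 1) when the weighted input sum reaches the threshold."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return 1 if weighted_sum >= threshold else 0

# Example: a two-input unit acting as a logical AND gate.
print(mcculloch_pitts([1, 1], weights=[1, 1], threshold=2))   # 1
print(mcculloch_pitts([1, 0], weights=[1, 1], threshold=2))   # 0
```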

Artificial neurons Nonlinear generalization of the McCulloch-Pitts neuron: $y = f(x, w)$, where $y$ is the neuron's output, $x$ is the vector of inputs, and $w$ is the vector of synaptic weights. Examples: the sigmoidal neuron, $y = \frac{1}{1 + e^{-(w^T x + b)}}$, and the Gaussian neuron, $y = e^{-\|x - w\|^2 / (2\sigma^2)}$.
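A small sketch of the two example neuron types, assuming the standard sigmoidal and Gaussian forms given above; the parameter names b (bias) and sigma (width) are my labels, not the slides':

```python
import math

def sigmoidal_neuron(x, w, b=0.0):
    """Sigmoidal neuron: y = 1 / (1 + exp(-(w . x + b)))."""
    activation = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-activation))

def gaussian_neuron(x, w, sigma=1.0):
    """Gaussian neuron: y = exp(-||x - w||^2 / (2 * sigma^2))."""
    sq_dist = sum((xi - wi) ** 2 for xi, wi in zip(x, w))
    return math.exp(-sq_dist / (2.0 * sigma ** 2))

print(sigmoidal_neuron([1.0, 2.0], w=[0.5, -0.3]))   # ~0.475
print(gaussian_neuron([1.0, 2.0], w=[0.0, 2.0]))     # ~0.607
```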

Artificial neural networks An artificial neural network is composed of many artificial neurons that are linked together according to a specific network architecture. The objective of the neural network is to transform the inputs into meaningful outputs.

Artificial neural networks Tasks to be solved by artificial neural networks: controlling the movements of a robot based on self-perception and other information (e.g., visual information); deciding the category of potential food items (e.g., edible or non-edible) in an artificial world; recognizing a visual object (e.g., a familiar face); predicting where a moving object will go when a robot wants to catch it.

Learning in biological systems Learning = learning by adaptation. The young animal learns that the green fruits are sour, while the yellowish/reddish ones are sweet. The learning happens by adapting the fruit-picking behaviour. At the neural level, the learning happens through changes in the synaptic strengths, the elimination of some synapses, and the building of new ones.

Learning as optimisation The objective of adapting the responses on the basis of the information received from the environment is to achieve a better state. E.g., the animal likes to eat many energy-rich, juicy fruits that make its stomach full and make it feel happy. In other words, the objective of learning in biological organisms is to optimise the amount of available resources or happiness, that is, in general, to achieve a state closer to the optimum.

Learning in biological neural networks Hebb's learning rules: synchronous activation increases the synaptic strength; asynchronous activation decreases the synaptic strength. These rules fit with energy minimisation principles: maintaining synaptic strength costs energy, so it should be maintained at those places where it is needed and not maintained at places where it is not needed.

Learning principle for artificial neural networks ENERGY MINIMIZATION We need an appropriate definition of energy for artificial neural networks; having that, we can use mathematical optimisation techniques to find how to change the weights of the synaptic connections between neurons. ENERGY = measure of task performance error

Neural network mathematics The network computes layer by layer: each neuron $i$ of layer $k$ produces $y_i^{(k)} = f(y^{(k-1)}, w_i^{(k)})$, where $y^{(k-1)}$ is the vector of outputs of the previous layer, starting from $y^{(0)} = x$ (the inputs) and ending with the network output.

Neural network mathematics Neural network: an input/output transformation $y_{out} = F(x, W)$, where $W$ is the matrix of all weight vectors.
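A minimal sketch of such an input/output transformation, assuming sigmoidal neurons and representing W as a list of per-layer weight matrices; the layer sizes and random weights below are arbitrary assumptions for illustration:

```python
import numpy as np

def forward(x, weights):
    """Propagate input x through successive layers of sigmoidal neurons.

    `weights` holds one matrix per layer; each row of a matrix is the
    weight vector of one neuron. Together the matrices play the role
    of W, the collection of all weight vectors.
    """
    y = x
    for W in weights:
        y = 1.0 / (1.0 + np.exp(-(W @ y)))   # layer k output from layer k-1
    return y

# Example: 2 inputs -> 3 hidden neurons -> 1 output, with random weights.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(3, 2)), rng.normal(size=(1, 3))]
print(forward(np.array([0.5, -1.0]), weights))
```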

MLP neural networks MLP = multi-layer perceptron. Perceptron: a single neuron computing $y_{out} = f(w^T x)$. MLP neural network: several layers of such neurons connected in sequence, so the output is the composition of the successive layer transformations.

RBF neural networks RBF = radial basis function: a function whose value depends on the distance of the input from a centre. Example: Gaussian RBF network, $y_{out} = \sum_k w_k \, e^{-\|x - c_k\|^2 / (2\sigma_k^2)}$, where the $c_k$ are the centres and the $\sigma_k$ the widths of the basis functions.
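A small sketch of evaluating a Gaussian RBF network of the form above; the centres, widths, and weights in the example are arbitrary assumptions:

```python
import numpy as np

def rbf_network(x, centers, sigmas, w):
    """y_out = sum_k w_k * exp(-||x - c_k||^2 / (2 * sigma_k^2))."""
    phi = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2.0 * sigmas ** 2))
    return phi @ w

# Example: two Gaussian basis functions on a one-dimensional input.
centers = np.array([[0.0], [1.0]])
print(rbf_network(np.array([0.2]), centers,
                  sigmas=np.array([0.5, 0.5]), w=np.array([1.0, -1.0])))
```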

Neural network tasks: control, classification, prediction, approximation. These can be reformulated in general as FUNCTION APPROXIMATION tasks. Approximation: given a set of values of a function g(x), build a neural network that approximates the g(x) values for any input x.

Neural network approximation Task specification: Data: a set of value pairs $(x_t, y_t)$, where $y_t = g(x_t) + z_t$ and $z_t$ is random measurement noise. Objective: find a neural network that represents the input/output transformation (a function) $F(x, W)$ such that $F(x, W)$ approximates $g(x)$ for every $x$.

Learning to approximate Error measure: $E = \sum_t \left( F(x_t, W) - y_t \right)^2$. Rule for changing the synaptic weights: $\Delta w_i = -c \, \frac{\partial E}{\partial w_i}$, where $c$ is the learning parameter (usually a constant).
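A minimal sketch of this weight-change rule applied to a one-parameter error function $E(w) = (wx - y)^2$; the learning parameter c = 0.1 and the data point are arbitrary choices:

```python
# dE/dw for E(w) = (w*x - y)^2 is 2*(w*x - y)*x; repeatedly applying
# the rule Δw = -c * dE/dw drives w toward the minimiser y/x.

def descend(w, x, y, c=0.1, steps=50):
    for _ in range(steps):
        grad = 2.0 * (w * x - y) * x   # dE/dw
        w = w - c * grad               # Δw = -c * dE/dw
    return w

print(descend(w=0.0, x=2.0, y=3.0))    # approaches 1.5
```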

Learning with a perceptron Perceptron: $y_{out} = w^T x$. Data: the pairs $(x_t, y_t)$, $t = 1, \dots, N$. Error: $E(t) = \left( w^T x_t - y_t \right)^2$. Learning: $w_i \leftarrow w_i - c \, \frac{\partial E(t)}{\partial w_i} = w_i - 2c \left( w^T x_t - y_t \right) x_{t,i}$. A perceptron is able to learn a linear function.
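A sketch of this learning rule as a sample-by-sample gradient-descent loop; NumPy, the learning parameter, the epoch count, and the demonstration data are my assumptions:

```python
import numpy as np

def train_perceptron(X, y, c=0.05, epochs=100):
    """Fit y ~ w . x by descending E(t) = (w . x_t - y_t)^2 per sample."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_t, y_t in zip(X, y):
            w -= c * 2.0 * (w @ x_t - y_t) * x_t   # Δw_i = -c * dE(t)/dw_i
    return w

# Example: recover the linear function g(x) = 2*x1 - x2 from samples.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
print(train_perceptron(X, 2.0 * X[:, 0] - X[:, 1]))   # close to [2, -1]
```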

Learning with RBF neural networks RBF neural network: $y_{out} = \sum_k w_k \, e^{-\|x - c_k\|^2 / (2\sigma_k^2)}$. Data: the pairs $(x_t, y_t)$. Error: $E(t) = \left( y_{out}(x_t) - y_t \right)^2$. Learning: $w_k \leftarrow w_k - c \, \frac{\partial E(t)}{\partial w_k}$. Only the synaptic weights of the output neuron are modified. An RBF neural network learns a nonlinear function.
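A sketch of this scheme, adapting only the output weights while the centres and widths stay fixed; the sine-curve demonstration data, the shared width sigma, and the learning parameter are arbitrary assumptions:

```python
import numpy as np

def rbf_features(X, centers, sigma):
    """Gaussian basis responses phi_k(x) for every sample in X."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def train_rbf_weights(X, y, centers, sigma, c=0.05, epochs=200):
    """Adapt only the output weights w_k; centres and widths stay fixed."""
    Phi = rbf_features(X, centers, sigma)
    w = np.zeros(len(centers))
    for _ in range(epochs):
        for phi_t, y_t in zip(Phi, y):
            w -= c * 2.0 * (phi_t @ w - y_t) * phi_t   # Δw_k = -c * dE(t)/dw_k
    return w

# Example: learn a nonlinear function (a sine curve) from samples.
X = np.linspace(-3.0, 3.0, 60)[:, None]
centers = np.linspace(-3.0, 3.0, 10)[:, None]
w = train_rbf_weights(X, np.sin(X[:, 0]), centers, sigma=1.0)
```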

Learning with MLP neural networks MLP neural network with $p$ layers: the output is the nested composition $y_{out} = F(x, W) = f^{(p)}(f^{(p-1)}(\dots f^{(1)}(x)))$. Data: the pairs $(x_t, y_t)$. Error: $E(t) = \left( F(x_t, W) - y_t \right)^2$. It is very complicated to calculate the weight changes for all layers directly.

Learning with backpropagation Solution of the complicated learning problem: first calculate the changes for the synaptic weights of the output neuron; then calculate the changes backward, starting from layer p-1, and propagate the local error terms backward. The method is still relatively complicated, but it is much simpler than the original optimisation problem.

Learning with general optimisation In general it is enough to have a single layer of nonlinear neurons in a neural network in order to learn to approximate a nonlinear function. In such a case, general optimisation may be applied without too much difficulty. Example: an MLP neural network with a single hidden layer of sigmoidal neurons and a linear output neuron, $y_{out} = \sum_k w_k^{(2)} \cdot \frac{1}{1 + e^{-\left( (w_k^{(1)})^T x + b_k \right)}}$.

Learning with general optimisation Synaptic weight change rule for the output neuron: $\Delta w_k^{(2)} = -c \, \frac{\partial E(t)}{\partial w_k^{(2)}} = -2c \left( y_{out} - y_t \right) h_k$, where $h_k$ is the output of hidden neuron $k$. Synaptic weight change rule for the neurons of the hidden layer, obtained with the chain rule: $\Delta w_{kj}^{(1)} = -2c \left( y_{out} - y_t \right) w_k^{(2)} \, h_k (1 - h_k) \, x_j$.
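A minimal sketch of these two rules for a network with one sigmoidal hidden layer and a linear output neuron, matching the example above; NumPy, the learning parameter, and the sizes in the usage example are my assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, y_t, W1, w2, c=0.1):
    """One gradient step on E(t) = (y_out - y_t)^2 for a 1-hidden-layer MLP."""
    h = sigmoid(W1 @ x)          # hidden-layer outputs h_k
    y_out = w2 @ h               # linear output neuron
    delta = 2.0 * (y_out - y_t)  # local error term at the output
    grad_w2 = delta * h                                  # output-neuron rule
    grad_W1 = np.outer(delta * w2 * h * (1.0 - h), x)    # hidden-layer rule
    return W1 - c * grad_W1, w2 - c * grad_w2

# Example usage with arbitrary sizes and random initial weights.
rng = np.random.default_rng(0)
W1, w2 = rng.normal(size=(4, 2)), rng.normal(size=4)
W1, w2 = backprop_step(np.array([0.5, -0.2]), y_t=1.0, W1=W1, w2=w2)
```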

New methods for learning with neural networks Bayesian learning: the distribution of the neural network parameters is learnt. Support vector learning: a minimal representative subset of the available data is used to calculate the synaptic weights of the neurons.

Summary Artificial neural networks are inspired by the learning processes that take place in biological systems. Artificial neurons and neural networks try to imitate the working mechanisms of their biological counterparts. Learning can be perceived as an optimisation process. Biological neural learning happens by the modification of the synaptic strength; artificial neural networks learn in the same way, by modifying their synaptic weights. The synaptic strength modification rules for artificial neural networks can be derived by applying mathematical optimisation methods.

Summary Learning tasks of artificial neural networks can be reformulated as function approximation tasks. Neural networks can be considered as nonlinear function approximation tools (i.e., linear combinations of nonlinear basis functions), where the parameters of the network are found by applying optimisation methods. The optimisation is done with respect to the approximation error measure. In general it is enough to have a single-hidden-layer neural network (MLP, RBF, or other) to learn the approximation of a nonlinear function. In such cases, general optimisation can be applied to find the change rules for the synaptic weights.