Published by Louise Allen. Modified over 6 years ago.
Artificial Neural Networks Group Members: Aqsa Ijaz, Sehrish Iqbal, Zunaira Munir
What is an ANN? Dr. Robert Hecht-Nielsen, the inventor of the first neurocomputer, defines a neural network as a human-brain-like system consisting of a large number of interconnected processing units.
About Human Brain The human brain is composed of about 100 billion nerve cells called neurons. Each neuron is connected to thousands of other cells by axons. Inputs from the sensory organs are accepted, and these inputs create electric impulses, which travel quickly through the neural network. A neuron can then pass the message on to other neurons to handle the issue, or choose not to forward it.
About Human Brain… Each neuron can connect with up to 200,000 other neurons. Neurons enable us to remember, recall, think, and apply previous experience to our every action. The power of the human mind comes from these networks of neurons and from learning.
Artificial Neural Networks
There are two Artificial Neural Network topologies: FeedForward and FeedBack.
FeedForward ANN The information flow is unidirectional: a unit sends information to another unit from which it does not receive any information, so there are no feedback loops. FeedForward networks are used in pattern generation, recognition, and classification. They have fixed inputs and outputs.
FeedBack ANN Here, feedback loops are allowed. Such networks are used in content-addressable memories.
Working of ANNs
Applications of Neural Networks
They can perform tasks that are easy for a human but difficult for a machine:
Speech − speech recognition, speech classification, text-to-speech conversion.
Telecommunications − image and data compression, automated information services, real-time spoken-language translation.
Software − pattern recognition in facial recognition, optical character recognition, etc.
Industrial − manufacturing process control, product design and analysis.
Main Properties of an ANN
Parallelism, Learning, Storing, Recalling, Decision making
Main Properties of ANNs
Parallelism: the capability of processing many inputs simultaneously across the network's units. Learning: the capability of acquiring knowledge from the environment — learning from examples and from experience, with or without a teacher. Storing: the capability of storing its learnt knowledge.
Main Properties of ANNs…
Recalling: The capability of recalling its learnt knowledge. Decision making: The capability of making particular decisions based upon the acquired knowledge.
Learning paradigms There are three major learning paradigms, each corresponding to a particular abstract learning task. These are: Supervised Learning Unsupervised Learning Reinforcement Learning
Supervised Learning: The network learns by example what a face is in terms of structure, color, etc., so that after several iterations it learns to recognize a face.
It involves a teacher that is more knowledgeable than the ANN itself. For example, the teacher feeds the network example data for which the teacher already knows the answers. The ANN makes guesses while recognizing; the teacher then provides the correct answers, and the network compares its guesses with the teacher's "correct" answers and adjusts itself according to the errors.
SUPERVISED LEARNING
[Diagram: input → Supervised Learning System → output; training info = desired (target) outputs.]
Unsupervised Learning
It is required when there is no example data set with known answers.
Unsupervised Learning Application
Reinforcement Learning
This strategy is built on observation: the ANN makes a decision by observing its environment. Reinforcement Learning allows the machine or software agent to learn its behaviour based on feedback from the environment.
[Diagram: input → RL System → output; training info = evaluations ("rewards / penalties").]
Supervised Learning vs. Reinforcement Learning
1.3 Basics of a Neuron Topology of a Neuron
Topology of a Neuron A neuron (a perceptron) is a basic processing unit that performs a small part of the overall computational task of a neural network.
ANN ANNs are composed of multiple nodes, which imitate the biological neurons of the human brain. The neurons are connected by links and interact with each other. Each node can take input data and perform simple operations on it; the result of these operations is passed on to other neurons. The output at each node is called its activation or node value. Each link is associated with a weight. ANNs are capable of learning, which takes place by altering the weight values. The following illustration shows a simple ANN.
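The node-and-link description above can be sketched in a few lines of code. This is a minimal illustration only: the two-input, two-hidden-node, one-output architecture, the sigmoid activation, and every weight value are assumptions chosen for the example, not taken from the slides.

```python
import math

def sigmoid(x):
    # A common activation function; squashes any value into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def node(inputs, weights):
    # Node value ("activation") = activation(weighted sum of inputs).
    return sigmoid(sum(i * w for i, w in zip(inputs, weights)))

inputs = [0.5, -0.2]                     # raw input data
hidden = [node(inputs, [0.4, 0.6]),      # hidden node 1 (assumed weights)
          node(inputs, [-0.3, 0.1])]     # hidden node 2
output = node(hidden, [0.7, -0.5])       # output node reads the hidden activations
print(output)                            # a value strictly between 0 and 1
```

Each `node` call plays the role of one neuron: it scales its inputs by the link weights, sums them, and passes the result on.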
Modeling Artificial Neurons
Example
Topology of a Neuron… Basic model of a neuron
[Figure 1.8: Basic model of a neuron. Inputs x0, x1, …, xn in the input layer are multiplied by weights w0, w1, …, wn; the output neuron computes v = φ(Σᵢ wᵢxᵢ), which becomes the output.]
Topology of a Neuron… There are four components of a neuron: connections, memory buffers (registers), an adder (a computing unit), and an activation function.
Components of a neuron…
Connections are directed links (shown by arrows) through which the neuron receives inputs from other neurons. Each input is scaled up or down by multiplying it by a number called the weight (the connection weight). The value of a weight indicates the strength of the influence the corresponding input has on the neuron. Weights are computed through a process called training.
Components of a neuron…
An adder: computes the weighted sum of the inputs (also known as the net input of the activation function). An activation function: transforms the output of the adder. The value resulting from this operation is termed the output of the neuron.
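These two computing components can be sketched directly. A minimal illustration, assuming a simple step activation and made-up numbers (neither is prescribed by the slides):

```python
def adder(inputs, weights):
    # The adder: weighted sum of the inputs, i.e. the net input
    # that will be fed to the activation function.
    return sum(x * w for x, w in zip(inputs, weights))

def step_activation(net):
    # The activation function: transforms the adder's output
    # into the neuron's final output.
    return 1 if net >= 0 else 0

inputs = [1.0, 0.5, -1.5]
weights = [0.8, -0.2, 0.4]       # each input is scaled by its connection weight
net = adder(inputs, weights)     # 1.0*0.8 + 0.5*(-0.2) + (-1.5)*0.4 = 0.1
print(step_activation(net))      # 1, since net >= 0
```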
The Perceptron Invented in 1957 by Frank Rosenblatt at the Cornell Aeronautical Laboratory, a perceptron is the simplest neural network possible: a computational model of a single neuron. A perceptron consists of one or more inputs, a processor, and a single output.
Continued A perceptron follows the “feed-forward” model, meaning inputs are sent into the neuron, are processed, and result in an output. In the diagram above, this means the network (one neuron) reads from left to right: inputs come in, output goes out.
Continued Step 1: Receive inputs.
Say we have a perceptron with two inputs—let’s call them x1 and x2. Input 0: x1 = 12 Input 1: x2 = 4
Continued Step 2: Weight inputs.
Each input that is sent into the neuron must first be weighted, i.e. multiplied by some value (often a number between -1 and 1). When creating a perceptron, we’ll typically begin by assigning random weights. Here, let’s give the inputs the following weights: Weight 0: 0.5 Weight 1: -1
Continued We take each input and multiply it by its weight.
Input 0 * Weight 0 ⇒ 12 * 0.5 = 6 Input 1 * Weight 1 ⇒ 4 * -1 = -4
Continued Step 3: Sum inputs. The weighted inputs are then summed: 6 + (-4) = 2.
Continued Step 4: Generate output. Output = sign(sum) ⇒ sign(2) ⇒ +1. The Perceptron Algorithm:
For every input, multiply that input by its weight. Sum all of the weighted inputs. Compute the output of the perceptron by passing that sum through an activation function (the sign of the sum).
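The three steps of the perceptron algorithm can be sketched as follows, reusing the worked numbers from the text (inputs 12 and 4, weights 0.5 and -1):

```python
def sign(x):
    # The activation function: the sign of the sum.
    return 1 if x >= 0 else -1

def perceptron_output(inputs, weights):
    # Step 1 & 2: multiply each input by its weight and sum the results.
    total = sum(x * w for x, w in zip(inputs, weights))
    # Step 3: pass the sum through the activation function.
    return sign(total)

print(perceptron_output([12, 4], [0.5, -1]))   # sign(6 + (-4)) = sign(2) => +1
```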
Training phase To train a neural network to answer correctly, we’re going to employ the method of supervised learning. With this method, the network is provided with inputs for which there is a known answer. This way the network can find out if it has made a correct guess. If it’s incorrect, the network can learn from its mistake and adjust its weights. The process is as follows:
Steps Provide the perceptron with inputs for which there is a known answer. Ask the perceptron to guess an answer. Compute the error. (Did it get the answer right or wrong?) Adjust all the weights according to the error. Return to Step 1 and repeat!
Continued The perceptron’s error can be defined as the difference between the desired answer and its guess. ERROR = DESIRED OUTPUT - GUESS OUTPUT
Continued The error is the determining factor in how the perceptron's weights should be adjusted. For any given weight, what we want to calculate is the change in weight, often called Δweight ("delta" weight, delta being the Greek letter Δ).
NEW WEIGHT = WEIGHT + ΔWEIGHT
Δweight is calculated as the error multiplied by the input:
ΔWEIGHT = ERROR * INPUT
Therefore:
NEW WEIGHT = WEIGHT + ERROR * INPUT
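The weight-update rule can be sketched as a loop over the training steps listed earlier (provide inputs with known answers, guess, compute the error, adjust the weights, repeat). The tiny dataset, the starting weights, and the task itself (answer +1 when the first input is larger than the second, -1 otherwise) are illustrative assumptions, not from the text:

```python
def sign(x):
    return 1 if x >= 0 else -1

def guess(inputs, weights):
    # The perceptron's guess: sign of the weighted sum.
    return sign(sum(x * w for x, w in zip(inputs, weights)))

# Step 1: inputs paired with known answers (+1 if first input is larger).
data = [([2.0, 1.0], 1), ([1.0, 3.0], -1), ([4.0, 2.0], 1), ([0.5, 2.5], -1)]
weights = [0.3, 0.4]                # arbitrary starting weights

for _ in range(20):                 # Step 5: repeat
    for inputs, desired in data:
        # Step 3: error = desired output - guessed output.
        error = desired - guess(inputs, weights)
        # Step 4: new weight = weight + error * input, for every weight.
        for i in range(len(weights)):
            weights[i] += error * inputs[i]

print([guess(inputs, weights) for inputs, _ in data])  # [1, -1, 1, -1]
```

Because this toy dataset is linearly separable, the loop settles on weights that reproduce every known answer after a few passes.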