
EE368 Soft Computing
Dr. Unnikrishnan P.C., Professor, EEE

Module III: Hybrid Intelligent Systems (ANFIS)

Hybrid Intelligent Systems: Neural Expert Systems and Neuro-Fuzzy Systems
- Introduction
- Neural expert systems
- Fuzzy expert systems
- Neuro-fuzzy systems
- ANFIS: Adaptive Neuro-Fuzzy Inference System

ANFIS Architecture

ANFIS: Adaptive Neuro-Fuzzy Inference System
The Sugeno fuzzy model was proposed for generating fuzzy rules from a given input-output data set. A typical Sugeno fuzzy rule is expressed in the following form:

IF x1 is A1 AND x2 is A2 AND ... AND xm is Am
THEN y = f(x1, x2, ..., xm)

where x1, x2, ..., xm are input variables and A1, A2, ..., Am are fuzzy sets.

When y is a constant, we obtain a zero-order Sugeno fuzzy model in which the consequent of a rule is specified by a singleton. When y is a first-order polynomial, i.e.

y = k0 + k1 x1 + k2 x2 + ... + km xm

we obtain a first-order Sugeno fuzzy model.
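To make this concrete, here is a minimal Python sketch of a first-order Sugeno rule consequent; the input and parameter values are illustrative assumptions, not taken from the lecture.

```python
import numpy as np

def sugeno_consequent(x, k):
    """First-order Sugeno consequent: y = k0 + k1*x1 + ... + km*xm."""
    return k[0] + np.dot(k[1:], x)

x = np.array([0.5, -1.2])        # crisp inputs x1, x2 (illustrative)
k = np.array([1.0, 2.0, -0.5])   # consequent parameters k0, k1, k2
print(sugeno_consequent(x, k))   # 1.0 + 2.0*0.5 + (-0.5)*(-1.2) = 2.6
```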

[Figure: Adaptive Neuro-Fuzzy Inference System: the six-layer ANFIS architecture, with inputs x1 and x2, fuzzification neurons A1, A2, B1, B2, rule neurons Π1–Π4, normalisation neurons N1–N4, defuzzification neurons, and a single summation neuron producing the output y.]

Layer 1 is the input layer. Neurons in this layer simply pass external crisp signals to Layer 2.

Layer 2 is the fuzzification layer. Neurons in this layer perform fuzzification. In Jang's model, fuzzification neurons have a bell activation function.
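A minimal sketch of the generalised bell function commonly used for such fuzzification neurons (the parameter values here are illustrative assumptions):

```python
import numpy as np

def gbell(x, a, b, c):
    """Generalised bell MF: 1 / (1 + |(x - c)/a|^(2b)).
    a sets the width, b the slope, c the centre."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

print(gbell(0.0, a=1.0, b=2.0, c=0.0))  # 1.0 at the centre
print(gbell(1.0, a=1.0, b=2.0, c=0.0))  # 0.5 at x = c + a
```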

Layer 3 is the rule layer. Each neuron in this layer corresponds to a single Sugeno-type fuzzy rule. A rule neuron receives inputs from the respective fuzzification neurons and calculates the firing strength of the rule it represents. In an ANFIS, the conjunction of the rule antecedents is evaluated by the product operator. Thus, the output of neuron i in Layer 3 is obtained as

$y_i^{(3)} = \prod_{j=1}^{k} x_{ji}^{(3)},$

so that, for instance, $y_{\Pi 1}^{(3)} = \mu_{A1} \times \mu_{B1} = \mu_1$, where the value of $\mu_1$ represents the firing strength, or the truth value, of Rule 1.
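Continuing with assumed membership values (not from the lecture), the four rule neurons of the two-input ANFIS above would compute their firing strengths as:

```python
import numpy as np

mu_A = np.array([0.8, 0.3])  # mu_A1(x1), mu_A2(x1) (illustrative)
mu_B = np.array([0.6, 0.9])  # mu_B1(x2), mu_B2(x2) (illustrative)

# Each (Ai, Bj) pair forms one rule; the antecedent conjunction is a product.
firing = np.array([a * b for a in mu_A for b in mu_B])
print(firing)  # [0.48 0.72 0.18 0.27]
```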

Layer 4 is the normalisation layer. Each neuron in this layer receives inputs from all neurons in the rule layer and calculates the normalised firing strength of a given rule. The normalised firing strength is the ratio of the firing strength of a given rule to the sum of the firing strengths of all rules. It represents the contribution of a given rule to the final result. Thus, the output of neuron i in Layer 4 is determined as

$y_i^{(4)} = \dfrac{\mu_i}{\sum_{j=1}^{n} \mu_j} = \bar{\mu}_i.$
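Carrying the same illustrative firing strengths forward, the normalisation step is one line:

```python
import numpy as np

firing = np.array([0.48, 0.72, 0.18, 0.27])
norm_firing = firing / firing.sum()
print(norm_firing)        # each rule's contribution to the final result
print(norm_firing.sum())  # 1.0 by construction
```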

Layer 5 is the defuzzification layer. Each neuron in this layer is connected to the respective normalisation neuron, and also receives the initial inputs, x1 and x2. A defuzzification neuron calculates the weighted consequent value of a given rule as

$y_i^{(5)} = x_i^{(5)} \left[k_{i0} + k_{i1} x_1 + k_{i2} x_2\right] = \bar{\mu}_i \left[k_{i0} + k_{i1} x_1 + k_{i2} x_2\right],$

where $x_i^{(5)}$ is the input to, and $y_i^{(5)}$ the output of, defuzzification neuron i in Layer 5, and $k_{i0}$, $k_{i1}$ and $k_{i2}$ is a set of consequent parameters of rule i.

Layer 6 is represented by a single summation neuron. This neuron calculates the sum of the outputs of all defuzzification neurons and produces the overall ANFIS output, y:

$y = \sum_{i=1}^{n} x_i^{(6)} = \sum_{i=1}^{n} \bar{\mu}_i \left[k_{i0} + k_{i1} x_1 + k_{i2} x_2\right].$
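Putting Layers 5 and 6 together for the two-input case, with the normalised firing strengths from above and assumed consequent parameters:

```python
import numpy as np

x1, x2 = 0.5, -1.2
norm_firing = np.array([0.2909, 0.4364, 0.1091, 0.1636])
K = np.array([[ 1.0,  2.0, -0.5],   # row i = [k_i0, k_i1, k_i2] of rule i
              [ 0.5, -1.0,  1.5],
              [ 2.0,  0.0,  0.3],
              [-0.5,  1.0,  0.0]])

consequents = K[:, 0] + K[:, 1] * x1 + K[:, 2] * x2  # per-rule consequents
y = np.sum(norm_firing * consequents)                # Layer 6: overall output
print(y)
```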

Can an ANFIS deal with problems where we do not have any prior knowledge of the rule consequent parameters? Yes. It is not necessary to have any prior knowledge of the rule consequent parameters: an ANFIS learns these parameters and tunes the membership functions during training.

Learning in the ANFIS model
An ANFIS uses a hybrid learning algorithm that combines the least-squares estimator and the gradient descent method. In the ANFIS training algorithm, each epoch is composed of a forward pass and a backward pass. In the forward pass, a training set of input patterns (an input vector) is presented to the ANFIS, neuron outputs are calculated on a layer-by-layer basis, and the rule consequent parameters are identified.

The rule consequent parameters are identified by the least-squares estimator. In Sugeno-style fuzzy inference, an output, y, is a linear function. Thus, given the values of the membership parameters and a training set of P input-output patterns, we can form P linear equations in terms of the consequent parameters:

$y_d(1) = \bar{\mu}_1(1) f_1(1) + \bar{\mu}_2(1) f_2(1) + \cdots + \bar{\mu}_n(1) f_n(1)$
$y_d(2) = \bar{\mu}_1(2) f_1(2) + \bar{\mu}_2(2) f_2(2) + \cdots + \bar{\mu}_n(2) f_n(2)$
...
$y_d(P) = \bar{\mu}_1(P) f_1(P) + \bar{\mu}_2(P) f_2(P) + \cdots + \bar{\mu}_n(P) f_n(P)$

In matrix notation, we have

$y_d = A\,k,$

where $y_d$ is a $P \times 1$ desired output vector,

$$y_d = \begin{bmatrix} y_d(1) \\ y_d(2) \\ \vdots \\ y_d(P) \end{bmatrix},$$

$A$ is a $P \times n(1+m)$ matrix,

$$A = \begin{bmatrix}
\bar{\mu}_1(1) & \bar{\mu}_1(1)\,x_1(1) & \cdots & \bar{\mu}_1(1)\,x_m(1) & \cdots & \bar{\mu}_n(1) & \bar{\mu}_n(1)\,x_1(1) & \cdots & \bar{\mu}_n(1)\,x_m(1) \\
\bar{\mu}_1(2) & \bar{\mu}_1(2)\,x_1(2) & \cdots & \bar{\mu}_1(2)\,x_m(2) & \cdots & \bar{\mu}_n(2) & \bar{\mu}_n(2)\,x_1(2) & \cdots & \bar{\mu}_n(2)\,x_m(2) \\
\vdots & \vdots & & \vdots & & \vdots & \vdots & & \vdots \\
\bar{\mu}_1(P) & \bar{\mu}_1(P)\,x_1(P) & \cdots & \bar{\mu}_1(P)\,x_m(P) & \cdots & \bar{\mu}_n(P) & \bar{\mu}_n(P)\,x_1(P) & \cdots & \bar{\mu}_n(P)\,x_m(P)
\end{bmatrix},$$

and $k$ is an $n(1+m) \times 1$ vector of unknown consequent parameters,

$$k = \begin{bmatrix} k_{10} & k_{11} & k_{12} & \cdots & k_{1m} & k_{20} & k_{21} & k_{22} & \cdots & k_{2m} & \cdots & k_{n0} & k_{n1} & k_{n2} & \cdots & k_{nm} \end{bmatrix}^T.$$
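A sketch of this least-squares step in Python; the shapes follow the matrix above, but the input patterns, firing strengths and desired outputs are random stand-ins (in a real ANFIS the normalised firing strengths come from the forward pass):

```python
import numpy as np

P, n, m = 101, 4, 2                    # patterns, rules, inputs
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(P, m))    # stand-in input patterns
mu_bar = rng.dirichlet(np.ones(n), P)  # stand-in normalised firing strengths
y_d = rng.uniform(-1, 1, P)            # stand-in desired outputs

# Row p of A: [mu1, mu1*x1, mu1*x2, mu2, mu2*x1, mu2*x2, ...] -> P x n(1+m)
A = np.hstack([np.c_[mu_bar[:, i], mu_bar[:, i:i+1] * X] for i in range(n)])
k, *_ = np.linalg.lstsq(A, y_d, rcond=None)
print(k.shape)  # (n * (1 + m),) = (12,)
```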

As soon as the rule consequent parameters are established, we compute an actual network output vector, y, and determine the error vector, e:

$e = y_d - y$

In the backward pass, the back-propagation algorithm is applied. The error signals are propagated back, and the antecedent parameters are updated according to the chain rule.
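The following is a minimal runnable sketch of the backward-pass idea for a one-input, two-rule first-order Sugeno ANFIS. Note one deliberate simplification: instead of Jang's analytical chain-rule derivation, the gradient of the squared error with respect to the bell-function parameters is estimated by central finite differences; all parameter values are illustrative.

```python
import numpy as np

def gbell(x, a, b, c):
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

def anfis_out(x, mf, K):
    """Forward pass: layers 1-2 (bell MFs), 3-4 (normalised firing
    strengths; trivial with one input), 5-6 (weighted consequents)."""
    mu = np.array([gbell(x, *p) for p in mf])
    mu_bar = mu / mu.sum()
    return np.sum(mu_bar * (K[:, 0] + K[:, 1] * x))

mf = np.array([[1.0, 2.0, 2.0],   # [a, b, c] of rule 1's antecedent MF
               [1.0, 2.0, 8.0]])  # [a, b, c] of rule 2's antecedent MF
K = np.array([[0.5, 1.0],         # [k_i0, k_i1] consequents (held fixed)
              [2.0, -0.5]])
x, yd, lr, h = 4.0, 1.5, 0.05, 1e-6

# Gradient of E = (yd - y)^2 w.r.t. each antecedent parameter by central
# differences, then one gradient-descent step on the antecedents.
grad = np.zeros_like(mf)
for i in range(mf.shape[0]):
    for j in range(mf.shape[1]):
        hi, lo = mf.copy(), mf.copy()
        hi[i, j] += h
        lo[i, j] -= h
        grad[i, j] = ((yd - anfis_out(x, hi, K)) ** 2 -
                      (yd - anfis_out(x, lo, K)) ** 2) / (2 * h)
mf -= lr * grad
print(anfis_out(x, mf, K))  # output after one antecedent update
```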

In the ANFIS training algorithm suggested by Jang, both antecedent parameters and consequent parameters are optimised. In the forward pass, the consequent parameters are adjusted while the antecedent parameters remain fixed. In the backward pass, the antecedent parameters are tuned while the consequent parameters are kept fixed.

Function approximation using the ANFIS model
In this example, an ANFIS is used to follow a trajectory of the non-linear function defined by the equation

$y = \dfrac{\cos(2 x_1)}{e^{x_2}}$

First, we choose an appropriate architecture for the ANFIS. The ANFIS must have two inputs, x1 and x2, and one output, y. Assigning two membership functions to each input gives four rules, so the ANFIS has the structure shown below.

[Figure: An ANFIS model with four rules; the six-layer structure with inputs x1 and x2, fuzzification neurons A1, A2, B1, B2, rule neurons Π1–Π4, normalisation neurons N1–N4, and a summation neuron producing y.]

The ANFIS training data includes 101 training samples. They are represented by a 101 × 3 matrix [x1 x2 yd], where x1 and x2 are input vectors and yd is the desired output vector. The first input vector, x1, starts at 0, increments by 0.1 and ends at 10. The second input vector, x2, is created by taking the sine of each element of vector x1. The elements of the desired output vector, yd, are determined by the function equation.
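A sketch of building this training matrix, assuming the target function given above:

```python
import numpy as np

x1 = np.linspace(0, 10, 101)           # 0 to 10 in steps of 0.1
x2 = np.sin(x1)                        # second input from the first
yd = np.cos(2 * x1) / np.exp(x2)       # desired output
data = np.column_stack([x1, x2, yd])   # the 101 x 3 matrix [x1 x2 yd]
print(data.shape)                      # (101, 3)
```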

[Figure: Learning in an ANFIS with two membership functions assigned to each input, after one epoch; training data and ANFIS output plotted over x1 in [0, 10] and x2 in [-1, 1].]

[Figure: Learning in an ANFIS with two membership functions assigned to each input, after 100 epochs; training data and ANFIS output.]

We can achieve some improvement, but much better results are obtained when we assign three membership functions to each input variable. In this case, the ANFIS model will have nine rules, as shown in the figure below.

[Figure: An ANFIS model with nine rules, mapping inputs x1 and x2 to the output y.]

[Figure: Learning in an ANFIS with three membership functions assigned to each input, after one epoch; training data and ANFIS output.]

[Figure: Learning in an ANFIS with three membership functions assigned to each input, after 100 epochs; training data and ANFIS output.]

[Figure: Initial and final membership functions of the ANFIS. (a) Initial membership functions over x1 in [0, 10] and x2 in [-1, 1]. (b) Membership functions after 100 epochs of training.]

A two-input, first-order Sugeno fuzzy model with two rules and the equivalent ANFIS architecture

ANFIS: output of each layer for a two-input, first-order Sugeno fuzzy model

Tsukamoto fuzzy model
In the Tsukamoto fuzzy model, the consequent of each fuzzy IF-THEN rule is represented by a fuzzy set with a monotonic membership function, as shown in the figure below. As a result, the inferred output of each rule is defined as a crisp value induced by the rule's firing strength. The overall output is taken as the weighted average of each rule's output. The figure illustrates the reasoning procedure for a two-input, two-rule system.
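A small sketch of Tsukamoto-style aggregation with assumed firing strengths and per-rule crisp outputs (in a full system each z_i would come from inverting the rule's monotonic consequent at its firing strength):

```python
import numpy as np

w = np.array([0.7, 0.4])       # firing strengths of the two rules
z = np.array([3.0, 8.0])       # crisp output induced by each consequent
y = np.sum(w * z) / np.sum(w)  # weighted average of the rule outputs
print(y)                       # (0.7*3 + 0.4*8) / 1.1 ~= 4.818
```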

A two-input, two-rule Tsukamoto fuzzy model

The equivalent ANFIS architecture