
1
Fuzzy Inference Systems

2
Review: Fuzzy Models (If-Then rules)

3
Basic Configuration of a Fuzzy Logic System: Input -> Fuzzification -> Inferencing -> Defuzzification -> Output. The system is tuned against a target: Error = Target - Output.

4
Types of Rules

Mamdani-Assilian Model:
R1: If x is A1 and y is B1 then z is C1
R2: If x is A2 and y is B2 then z is C2
where Ai, Bi and Ci are fuzzy sets defined on the universes of x, y and z respectively.

Takagi-Sugeno Model:
R1: If x is A1 and y is B1 then z = f1(x, y)
R2: If x is A2 and y is B2 then z = f2(x, y)
For example: fi(x, y) = ai*x + bi*y + ci

5
Types of Rules [figure comparing the Mamdani-Assilian model and the Takagi-Sugeno model]

6
Mamdani Fuzzy Models

7
The Reasoning Scheme Both antecedent and consequent are fuzzy

8
The Reasoning Scheme (continued) [figure]

9
Rule 1: IF FeO is high AND SiO2 is low AND Granite is proximal AND Fault is proximal, THEN metal is high
Rule 2: IF FeO is average AND SiO2 is high AND Granite is intermediate AND Fault is proximal, THEN metal is average
Rule 3: IF FeO is low AND SiO2 is high AND Granite is distal AND Fault is distal, THEN metal is low
Inputs: FeO = 60%, SiO2 = 60%, distance to Granite = 5 km, distance to Fault = 1 km. Metal = ?
[Figure: membership functions over FeO (30-70%), SiO2 (40-70%), Granite (0-20 km), Fault (0-10 km) and metal (0-1000 t); each rule's consequent set is produced by implication and the results are combined by Max]

10
Defuzzifier
Since the consequent is fuzzy, it has to be defuzzified. The defuzzifier converts the fuzzy output of the inference engine to a crisp value, using membership functions analogous to the ones used by the fuzzifier. Five commonly used defuzzification methods:
- Centroid of area (COA)
- Bisector of area (BOA)
- Mean of maximum (MOM)
- Smallest of maximum (SOM)
- Largest of maximum (LOM)

11
Defuzzifier

12
The outputs of Rule 1, Rule 2 and Rule 3 are aggregated (Max), and the aggregate is defuzzified by finding its centroid: 125 tonnes of metal. Formula for the centroid (COA):
z* = ( ∫ z·μ(z) dz ) / ( ∫ μ(z) dz )
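The Max aggregation and centroid defuzzification above can be sketched in code. The triangular membership functions for low/average/high metal and the three firing strengths below are illustrative assumptions, not values from the slides:

```python
import numpy as np

def tri(z, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return np.maximum(np.minimum((z - a) / (b - a), (c - z) / (c - b)), 0.0)

z = np.linspace(0.0, 1000.0, 2001)   # output universe: metal tonnage (t)
mf_low = tri(z, 0, 100, 250)         # assumed consequent sets
mf_avg = tri(z, 150, 400, 650)
mf_high = tri(z, 500, 800, 1000)

# Firing strengths of the three rules (would come from the antecedents)
s_high, s_avg, s_low = 0.2, 0.6, 0.1

# Implication: clip each consequent set at its rule's firing strength,
# then aggregate the clipped sets with Max
agg = np.maximum.reduce([np.minimum(s_high, mf_high),
                         np.minimum(s_avg, mf_avg),
                         np.minimum(s_low, mf_low)])

# Centroid of area (COA) defuzzification
z_star = np.sum(z * agg) / np.sum(agg)
print(round(float(z_star), 1))
```

Any of the other four defuzzifiers (BOA, MOM, SOM, LOM) would operate on the same aggregated curve `agg`, only the final reduction differs.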

13
Sugeno Fuzzy Models
Also known as the TSK fuzzy model (Takagi, Sugeno & Kang, 1985).

14
Fuzzy Rules of the TSK Model
If x is A and y is B then z = f(x, y)
The antecedent terms A and B are fuzzy sets; the consequent f(x, y) is a crisp function, very often a polynomial in x and y. The order of a Takagi-Sugeno type fuzzy inference system equals the order of the polynomial used. While the antecedent is fuzzy, the consequent is crisp.

15
The Reasoning Scheme

16
Examples
R1: if X is small and Y is small then z = x + y + 1
R2: if X is small and Y is large then z = y + 3
R3: if X is large and Y is small then z = x + 3
R4: if X is large and Y is large then z = x + y + 2

17
Takagi-Sugeno System
1. IF x is f1x(x) AND y is f1y(y) THEN z1 = p10 + p11*x + p12*y
2. IF x is f2x(x) AND y is f1y(y) THEN z2 = p20 + p21*x + p22*y
3. IF x is f1x(x) AND y is f2y(y) THEN z3 = p30 + p31*x + p32*y
4. IF x is f2x(x) AND y is f2y(y) THEN z4 = p40 + p41*x + p42*y

The firing strength (= output of the IF part) of each rule:
s1 = f1x(x) AND f1y(y)
s2 = f2x(x) AND f1y(y)
s3 = f1x(x) AND f2y(y)
s4 = f2x(x) AND f2y(y)

Output of each rule (= firing strength x consequent function):
o1 = s1*z1,  o2 = s2*z2,  o3 = s3*z3,  o4 = s4*z4

Overall output of the fuzzy inference system:
z = (o1 + o2 + o3 + o4) / (s1 + s2 + s3 + s4)
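A minimal sketch of this weighted-average inference, using the four rules from the Examples slide. The Gaussian "small"/"large" membership functions and their centres/spreads are assumptions chosen only for illustration:

```python
import math

def gauss(v, c, sigma):
    """Gaussian membership of v in a set centred at c with spread sigma."""
    return math.exp(-((v - c) ** 2) / (2 * sigma ** 2))

def small(v):
    return gauss(v, 0.0, 2.0)    # assumed centre/spread for "small"

def large(v):
    return gauss(v, 10.0, 2.0)   # assumed centre/spread for "large"

def tsk(x, y):
    # Per rule: (firing strength via product T-norm, crisp consequent)
    rules = [
        (small(x) * small(y), x + y + 1),   # R1
        (small(x) * large(y), y + 3),       # R2
        (large(x) * small(y), x + 3),       # R3
        (large(x) * large(y), x + y + 2),   # R4
    ]
    # Overall output: firing-strength-weighted average of rule outputs
    return sum(s * z for s, z in rules) / sum(s for s, _ in rules)

print(tsk(0.0, 0.0))    # R1 dominates, so the output is close to 0 + 0 + 1 = 1
print(tsk(10.0, 10.0))  # R4 dominates, so the output is close to 10 + 10 + 2 = 22
```

Inputs between the centres blend the rule consequents smoothly, which is the point of the weighted average.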

18
Sugeno system
Rule 1: IF FeO is high AND SiO2 is low AND Granite is proximal AND Fault is proximal, THEN Gold = p1(FeO%) + q1(SiO2%) + r1(Distance2Granite) + s1(Distance2Fault) + t1
Rule 2: IF FeO is average AND SiO2 is high AND Granite is intermediate AND Fault is proximal, THEN Gold = p2(FeO%) + q2(SiO2%) + r2(Distance2Granite) + s2(Distance2Fault) + t2
Rule 3: IF FeO is low AND SiO2 is high AND Granite is distal AND Fault is distal, THEN Gold = p3(FeO%) + q3(SiO2%) + r3(Distance2Granite) + s3(Distance2Fault) + t3

19
Sugeno system
Rule 1 (firing strength s1): IF FeO is high AND SiO2 is low AND Granite is proximal AND Fault is proximal, THEN Gold(R1) = p1(FeO%) + q1(SiO2%) + r1(Distance2Granite) + s1(Distance2Fault) + t1
Rule 2 (firing strength s2): IF FeO is average AND SiO2 is high AND Granite is intermediate AND Fault is proximal, THEN Gold(R2) = p2(FeO%) + q2(SiO2%) + r2(Distance2Granite) + s2(Distance2Fault) + t2
Rule 3 (firing strength s3): IF FeO is low AND SiO2 is high AND Granite is distal AND Fault is distal, THEN Gold(R3) = p3(FeO%) + q3(SiO2%) + r3(Distance2Granite) + s3(Distance2Fault) + t3
Inputs: FeO = 60%, SiO2 = 60%, distance to Granite = 5 km, distance to Fault = 1 km. Metal = ?
[Figure: membership functions over FeO (30-70%), SiO2 (40-70%), Granite (0-20 km) and Fault (0-10 km)]

20
Sugeno system: Output
Each rule output is weighted by that rule's firing strength:
Gold(R1) = p1(FeO%) + q1(SiO2%) + r1(Distance2Granite) + s1(Distance2Fault) + t1, with firing strength s1
Gold(R2) = p2(FeO%) + q2(SiO2%) + r2(Distance2Granite) + s2(Distance2Fault) + t2, with firing strength s2
Gold(R3) = p3(FeO%) + q3(SiO2%) + r3(Distance2Granite) + s3(Distance2Fault) + t3, with firing strength s3
The overall output is the firing-strength-weighted average of the rule outputs:
Gold = (s1*Gold(R1) + s2*Gold(R2) + s3*Gold(R3)) / (s1 + s2 + s3)

21
A Neural Fuzzy System
Implements an FIS in the framework of neural networks. [Figure: inputs x and y feed Fuzzification Nodes, then Antecedent Nodes, then Output Nodes]

22
Fuzzification Nodes
These represent the term sets of the features. If we have two features x and y, with two linguistic values (say BIG and SMALL) defined on each of them, then we have 4 fuzzification nodes. We use Gaussian membership functions for fuzzification because they are differentiable; triangular and trapezoidal membership functions are not differentiable everywhere.

23
Fuzzification Nodes (contd.)
The centre c and spread σ are the two free parameters of each Gaussian membership function, and they need to be determined. How to determine c and σ? Two strategies: 1) keep c and σ fixed, or 2) update c and σ through a tuning algorithm.
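Because the Gaussian is differentiable in both c and σ, a gradient-based tuning algorithm can update them; a hedged sketch of the membership function and its two partial derivatives:

```python
import math

def mu(x, c, sigma):
    """Gaussian membership function with centre c and spread sigma."""
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def dmu_dc(x, c, sigma):
    """Partial derivative of mu with respect to the centre c."""
    return mu(x, c, sigma) * (x - c) / sigma ** 2

def dmu_dsigma(x, c, sigma):
    """Partial derivative of mu with respect to the spread sigma."""
    return mu(x, c, sigma) * (x - c) ** 2 / sigma ** 3

print(mu(5.0, 5.0, 1.0))   # membership peaks at 1.0 when x == c
```

A triangular membership function would have kinks at its feet and peak, which is exactly where these derivatives would fail to exist.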

24
Consequent Nodes
p, q and k are the three free parameters of the consequent polynomial function. How to determine p, q and k? Two strategies: 1) keep them fixed, or 2) update them through a tuning algorithm.

25
[Network diagram]
Fuzzification nodes (BIG and SMALL for each of x and y): memberships μx1, μx2, μy1, μy2
Antecedent nodes (e.g. "If x is SMALL and y is SMALL"): firing strengths w1, w2, w3, w4
Consequent nodes (e.g. z4 = p4*x + q4*y + k4): outputs z1, z2, z3, z4
Output node: O = (w1*z1 + w2*z2 + w3*z3 + w4*z4) / (w1 + w2 + w3 + w4)
Given a target t, Error = (1/2)(t - O)^2
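A minimal sketch of the forward pass through such a four-rule network. Every number below (membership values, consequent parameters, inputs and target) is an illustrative assumption:

```python
mu_x = {'SMALL': 0.8, 'BIG': 0.3}   # assumed fuzzification of input x
mu_y = {'SMALL': 0.6, 'BIG': 0.5}   # assumed fuzzification of input y

# Antecedent nodes: firing strengths (product T-norm assumed here;
# min is another common choice)
w = [mu_x['SMALL'] * mu_y['SMALL'],   # If x is SMALL and y is SMALL
     mu_x['SMALL'] * mu_y['BIG'],
     mu_x['BIG'] * mu_y['SMALL'],
     mu_x['BIG'] * mu_y['BIG']]

x_val, y_val = 2.0, 3.0
# Consequent nodes: z_i = p_i*x + q_i*y + k_i (illustrative parameters)
params = [(1.0, 1.0, 0.0), (0.5, 1.0, 1.0), (1.0, 0.5, 1.0), (2.0, 2.0, 0.5)]
z = [p * x_val + q * y_val + k for p, q, k in params]

# Output node: normalized weighted sum, then squared error vs. the target
O = sum(wi * zi for wi, zi in zip(w, z)) / sum(w)
t = 6.0
E = 0.5 * (t - O) ** 2
print(round(O, 3), round(E, 3))
```

Training would push E down by adjusting the membership parameters and/or the consequent parameters, which is exactly what the next slides develop as ANFIS.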

26
ANFIS Architecture Squares: Adaptive nodes Circles: Fixed nodes

27
ANFIS Architecture: Layer 1 (Adaptive)
Contains adaptive nodes, each with a Gaussian membership function.
Number of nodes = number of variables x number of linguistic values. In the previous example there are 4 nodes (2 variables x 2 linguistic values each).
Two parameters are estimated per node: the mean (centre) and the standard deviation (spread). These are called premise parameters.
Number of premise parameters = 2 x number of nodes = 8 in the example.

28
ANFIS Architecture: Layer 2 (Fixed)
Contains fixed nodes, each applying the product operator (a T-norm) to return the firing strength of one If-Then rule. The firing strength can be normalized; in ANFIS, each node returns a normalized firing strength. These are fixed nodes, so there are no parameters to estimate.

29
ANFIS Architecture: Layer 3 (Adaptive)
Each node contains an adaptive polynomial and returns the output of one fuzzy If-Then rule. Number of nodes = number of If-Then rules. The polynomial coefficients (the p's) are called consequent parameters.

30
ANFIS Architecture: Layer 4 (Fixed)
A single node in this layer sums the outputs of the nodes in the previous layer. No parameters to estimate.

31
ANFIS Training
The overall output is linear in the consequent parameters p_ki if the premise parameters, and therefore the firing strengths s_k of the fuzzy if-then rules, are fixed. ANFIS uses a hybrid learning procedure (Jang and Sun, 1995) to estimate the premise and consequent parameters. The hybrid learning procedure estimates the consequent parameters (keeping the premise parameters fixed) in a forward pass, and the premise parameters (keeping the consequent parameters fixed) in a backward pass.

32
ANFIS Training
Squares: adaptive nodes. Circles: fixed nodes.
The forward pass: propagate information forward until Layer 3, then estimate the consequent parameters with the least squares estimator.
The backward pass: propagate the error signals backwards and update the premise parameters by gradient descent.
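The forward pass can be sketched as follows: with the premise parameters held fixed, the output is linear in the consequent parameters, so least squares solves for them directly. The two-rule setup, the Gaussian centres/spreads and the synthetic data below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, 50)
y = 2 * x + 1 + rng.normal(0.0, 0.1, 50)   # synthetic training targets

def gauss(v, c, s):
    return np.exp(-((v - c) ** 2) / (2 * s ** 2))

# Fixed premise parameters: centres/spreads of two rules on one input
s1, s2 = gauss(x, 2.0, 3.0), gauss(x, 8.0, 3.0)
n1, n2 = s1 / (s1 + s2), s2 / (s1 + s2)    # normalized firing strengths

# Output = n1*(p1*x + r1) + n2*(p2*x + r2) is linear in (p1, r1, p2, r2),
# so the consequent parameters come from one least squares solve
A = np.column_stack([n1 * x, n1, n2 * x, n2])
theta, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(theta, 2))
```

A real backward pass would then hold `theta` fixed and move the centres/spreads by gradient descent, alternating the two passes each epoch.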

33
ANFIS Training: Least Squares Estimation
1. Data are assembled in the form (x_n, y_n).
2. We assume a linear relation between x and y: y = ax + b.
3. This extends to n dimensions: y = a1*x1 + a2*x2 + a3*x3 + … + b.
The problem: find values of the coefficients a_i such that the linear combination best fits the data.

34
ANFIS Training: Least Squares Estimation
Given data {(x1, y1), …, (xN, yN)}, we define the error of the fit y = ax + b as:
E(a, b) = Σn (yn - (a*xn + b))^2
This is just N times the variance of the residuals {y1 - (a*x1 + b), …, yN - (a*xN + b)}.
The goal is to find the values of a and b that minimize this error; in other words, to set the partial derivatives of the error with respect to a and b to zero.

35
ANFIS Training: Least Squares Estimation
Setting the partial derivatives to zero gives:
∂E/∂a = -2 Σ xn*(yn - (a*xn + b)) = 0
∂E/∂b = -2 Σ (yn - (a*xn + b)) = 0
We may rewrite these as the normal equations:
a Σ xn^2 + b Σ xn = Σ xn*yn
a Σ xn + b*N = Σ yn
The values of a and b that minimize the error therefore satisfy the matrix equation:
[ Σ xn^2  Σ xn ] [a]   [ Σ xn*yn ]
[ Σ xn    N    ] [b] = [ Σ yn    ]
Hence a and b are estimated by solving this 2x2 system (inverting the matrix).

36
ANFIS Training: Least Squares Estimation
For the following data, find the least squares estimator:

SNo   X    Y    X^2   XY
1     2    9    4     18
2     3    11   9     33
3     4    13   16    52
4     6    17   36    102
5     8    21   64    168
6     1    7    1     7
7     2    9    4     18
8     11   27   121   297
9     14   33   196   462
TOTAL 51   147  451   1157
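The table can be checked directly with the closed-form estimators that follow from the normal equations, a = (N*Σxy - Σx*Σy) / (N*Σx^2 - (Σx)^2) and b = (Σy - a*Σx) / N:

```python
xs = [2, 3, 4, 6, 8, 1, 2, 11, 14]
ys = [9, 11, 13, 17, 21, 7, 9, 27, 33]
N = len(xs)

# Column sums, matching the table's TOTAL row
Sx, Sy = sum(xs), sum(ys)
Sxx = sum(x * x for x in xs)
Sxy = sum(x * y for x, y in zip(xs, ys))

a = (N * Sxy - Sx * Sy) / (N * Sxx - Sx ** 2)
b = (Sy - a * Sx) / N
print(a, b)   # -> 2.0 5.0  (the data lie exactly on y = 2x + 5)
```

Since every row of the table satisfies y = 2x + 5 exactly, the least squares fit recovers those coefficients with zero residual.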

37
ANFIS Training: Least Squares Estimation
Substituting the totals (N = 9, Σx = 51, Σy = 147, Σx^2 = 451, Σxy = 1157) into the normal equations:
a = (N*Σxy - Σx*Σy) / (N*Σx^2 - (Σx)^2) = (9*1157 - 51*147) / (9*451 - 51^2) = 2916 / 1458 = 2
b = (Σy - a*Σx) / N = (147 - 2*51) / 9 = 45 / 9 = 5
So the least squares fit is y = 2x + 5.

38
ANFIS Training : Gradient descent
