Rough Sets, Their Extensions and Applications

1. Introduction
• Rough set theory offers one of the most distinctive recent approaches to dealing with incomplete or imperfect knowledge.
• Rough set theory has given rise to various extensions of the original theory and an increasingly wide field of application.
• This paper gives a concise overview of the basic ideas of rough set theory and its major extensions.

2. Rough set theory
• Rough set theory (RST) is an extension of conventional set theory that supports approximations in decision making.
• A rough set is itself the approximation of a vague concept (set) by a pair of precise concepts, called the lower and upper approximations.

ISA Lab., CU, Korea

• The lower approximation is a description of the domain objects which are known with certainty to belong to the subset of interest.
• The upper approximation is a description of the objects which possibly belong to the subset.

2.1 Information and decision systems
• An information system can be viewed as a table of data, consisting of objects (rows in the table) and attributes (columns).
• An information system may be extended by the inclusion of decision attributes.

Table 1: example of a decision system

x ∈ U | a  b  c  d | e
------+------------+---
  0   | S  R  T  T | R
  1   | R  S  S  S | T
  2   | T  R  R  S | S
  3   | S  S  R  T | T
  4   | S  R  T  R | S
  5   | T  T  R  S | S
  6   | T  S  S  S | T
  7   | R  S  S  R | S
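As an illustration, the decision system of Table 1 can be held in a plain Python dictionary mapping each object to its attribute values. This is a minimal sketch; the names ATTRS, ROWS and TABLE are our own and not part of the paper.

```python
# Table 1 as a nested dict: object -> {attribute: value}.
# Conditional attributes a-d, decision attribute e.
ATTRS = "abcde"
ROWS = "SRTTR RSSST TRRSS SSRTT SRTRS TTRSS TSSST RSSRS".split()
TABLE = {i: dict(zip(ATTRS, row)) for i, row in enumerate(ROWS)}
U = set(TABLE)  # the universe of discourse {0, ..., 7}

print(TABLE[0])  # {'a': 'S', 'b': 'R', 'c': 'T', 'd': 'T', 'e': 'R'}
```

Every rough-set operation in the following sections reduces to simple set manipulation over this structure.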

The table consists of four conditional features (a, b, c, d), a decision feature (e) and eight objects.

• I = (U, A), where U is a non-empty finite set of objects (the universe of discourse) and A is a non-empty finite set of attributes such that a: U → V_a for every a ∈ A. V_a is the set of values that attribute a may take.

2.2 Indiscernibility
• With any P ⊆ A there is an associated equivalence relation IND(P):

    IND(P) = {(x, y) ∈ U × U | ∀a ∈ P, a(x) = a(y)}

• The partition of U determined by IND(P) is denoted U/IND(P) or U/P, which is simply the set of equivalence classes generated by IND(P):

    U/IND(P) = ⊗{U/IND({a}) | a ∈ P}

where A ⊗ B = {X ∩ Y | X ∈ A, Y ∈ B, X ∩ Y ≠ ∅}.
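The partition U/IND(P) can be computed by grouping objects on their P-values. A minimal sketch in Python over the Table 1 data (the function name partition is our own):

```python
ATTRS = "abcde"
ROWS = "SRTTR RSSST TRRSS SSRTT SRTRS TTRSS TSSST RSSRS".split()
TABLE = {i: dict(zip(ATTRS, row)) for i, row in enumerate(ROWS)}

def partition(P, table):
    """Equivalence classes of IND(P): objects with equal values on all of P."""
    classes = {}
    for x in table:
        key = tuple(table[x][a] for a in sorted(P))
        classes.setdefault(key, set()).add(x)
    return list(classes.values())

# U/IND({b, c}) yields the classes {2}, {0, 4}, {3}, {1, 6, 7}, {5}
print(partition({"b", "c"}, TABLE))
```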

• The equivalence classes of the indiscernibility relation with respect to P are denoted [x]_P, x ∈ U.
• Example: for P = {b, c},
U/IND(P) = U/IND({b}) ⊗ U/IND({c}) = {{0, 2, 4}, {1, 3, 6, 7}, {5}} ⊗ {{2, 3, 5}, {1, 6, 7}, {0, 4}} = {{2}, {0, 4}, {3}, {1, 6, 7}, {5}}.

2.3 Lower and upper approximations
• Let X ⊆ U. X can be approximated using only the information contained within P by constructing the P-lower and P-upper approximations of the classical crisp set X:

    P̲X = {x ∈ U | [x]_P ⊆ X}
    P̄X = {x ∈ U | [x]_P ∩ X ≠ ∅}

• The tuple ⟨P̲X, P̄X⟩ is called a rough set. Consider the approximation of concept X in Fig. 1. Each square in the diagram represents an equivalence class, generated by indiscernibility between object values.
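The two approximations follow directly from the partition: a class contributes to the lower approximation when it lies wholly inside X, and to the upper approximation when it merely overlaps X. A sketch (function names are our own; as an example concept X we take the objects whose decision e is S):

```python
ATTRS = "abcde"
ROWS = "SRTTR RSSST TRRSS SSRTT SRTRS TTRSS TSSST RSSRS".split()
TABLE = {i: dict(zip(ATTRS, row)) for i, row in enumerate(ROWS)}

def partition(P, table):
    classes = {}
    for x in table:
        key = tuple(table[x][a] for a in sorted(P))
        classes.setdefault(key, set()).add(x)
    return list(classes.values())

def lower(P, X, table):
    """P-lower approximation: union of classes wholly contained in X."""
    return {x for B in partition(P, table) if B <= X for x in B}

def upper(P, X, table):
    """P-upper approximation: union of classes that intersect X."""
    return {x for B in partition(P, table) if B & X for x in B}

X = {x for x in TABLE if TABLE[x]["e"] == "S"}  # X = {2, 4, 5, 7}
print(lower({"b", "c"}, X, TABLE))  # {2, 5}
print(upper({"b", "c"}, X, TABLE))  # {0, 1, 2, 4, 5, 6, 7}
```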

Fig. 1. A rough set

2.4 Positive, negative and boundary regions
• Let P and Q be equivalence relations over U; then the positive, negative and boundary regions are defined as

    POS_P(Q) = ∪_{X ∈ U/Q} P̲X
    NEG_P(Q) = U − ∪_{X ∈ U/Q} P̄X
    BND_P(Q) = ∪_{X ∈ U/Q} P̄X − ∪_{X ∈ U/Q} P̲X

The positive region comprises all objects of U that can be classified to classes of U/Q using the information contained within attributes P. The boundary region is the set of objects that can possibly, but not certainly, be classified in this way. The negative region is the set of objects that cannot be classified to classes of U/Q.
• For example, let P = {b, c} and Q = {e}; then

    POS_P(Q) = {2, 3, 5}
    NEG_P(Q) = ∅
    BND_P(Q) = {0, 1, 4, 6, 7}

2.5 Attribute dependency and significance
• An important issue in data analysis is discovering dependencies between attributes.
• A set of attributes Q depends totally on a set of attributes P, denoted P ⇒ Q, if all attribute values from Q are uniquely determined by values of attributes from P.
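The three regions can be computed by taking lower and upper approximations of every class of U/Q. A sketch over the Table 1 data (function names are our own):

```python
ATTRS = "abcde"
ROWS = "SRTTR RSSST TRRSS SSRTT SRTRS TTRSS TSSST RSSRS".split()
TABLE = {i: dict(zip(ATTRS, row)) for i, row in enumerate(ROWS)}

def partition(P, table):
    classes = {}
    for x in table:
        key = tuple(table[x][a] for a in sorted(P))
        classes.setdefault(key, set()).add(x)
    return list(classes.values())

def regions(P, Q, table):
    """Positive, negative and boundary regions of Q with respect to P."""
    U = set(table)
    pos, upp = set(), set()
    for X in partition(Q, table):
        for B in partition(P, table):
            if B <= X:
                pos |= B  # certainly belongs to a class of U/Q
            if B & X:
                upp |= B  # possibly belongs to a class of U/Q
    return pos, U - upp, upp - pos

pos, neg, bnd = regions({"b", "c"}, {"e"}, TABLE)
print(pos, neg, bnd)  # POS = {2, 3, 5}, NEG = empty, BND = {0, 1, 4, 6, 7}
```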

• In rough set theory, dependency is defined in the following way: for P, Q ⊆ A, it is said that Q depends on P in a degree k (0 ≤ k ≤ 1), denoted P ⇒_k Q, if

    k = γ_P(Q) = |POS_P(Q)| / |U|

where |S| stands for the cardinality of the set S.
• In the example, the degree of dependency of attribute {e} on the attributes {b, c} is

    γ_{b,c}({e}) = |{2, 3, 5}| / |{0, 1, 2, 3, 4, 5, 6, 7}| = 3/8

• Given P, Q and an attribute a ∈ P, the significance of attribute a upon Q is defined by

    σ_P(Q, a) = γ_P(Q) − γ_{P−{a}}(Q)

For example, if P = {a, b, c} and Q = {e} then

    γ_{a,b,c}({e}) = |{2, 3, 5, 6}| / 8 = 1/2

• Calculating the significance of the three attributes gives

    σ_P(Q, a) = γ_{a,b,c}({e}) − γ_{b,c}({e}) = 1/2 − 3/8 = 1/8
    σ_P(Q, b) = γ_{a,b,c}({e}) − γ_{a,c}({e}) = 1/2 − 1/2 = 0
    σ_P(Q, c) = γ_{a,b,c}({e}) − γ_{a,b}({e}) = 1/2 − 1/2 = 0

From this it follows that attribute a is indispensable, but attributes b and c can be dispensed with when considering the dependency between the decision attribute and the given individual conditional attributes.
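The dependency degree γ and significance σ can be sketched directly from the definitions, again over the Table 1 data (function names are our own):

```python
ATTRS = "abcde"
ROWS = "SRTTR RSSST TRRSS SSRTT SRTRS TTRSS TSSST RSSRS".split()
TABLE = {i: dict(zip(ATTRS, row)) for i, row in enumerate(ROWS)}

def partition(P, table):
    classes = {}
    for x in table:
        key = tuple(table[x][a] for a in sorted(P))
        classes.setdefault(key, set()).add(x)
    return list(classes.values())

def gamma(P, Q, table):
    """Dependency degree k = |POS_P(Q)| / |U|."""
    pos = set()
    for X in partition(Q, table):
        for B in partition(P, table):
            if B <= X:
                pos |= B
    return len(pos) / len(table)

def sigma(P, Q, a, table):
    """Significance of a: the drop in gamma when a is removed from P."""
    return gamma(P, Q, table) - gamma(P - {a}, Q, table)

P, Q = {"a", "b", "c"}, {"e"}
print(gamma({"b", "c"}, Q, TABLE))             # 0.375 (= 3/8)
print([sigma(P, Q, x, TABLE) for x in "abc"])  # [0.125, 0.0, 0.0]
```

The results reproduce the worked example: only attribute a has non-zero significance.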

2.6 Reducts
• To search for a minimal representation of the original dataset, the concept of a reduct is introduced and defined as a minimal subset R of the initial attribute set C such that, for a given set of decision attributes D, γ_R(D) = γ_C(D).
• R is a minimal subset if γ_{R−{a}}(D) ≠ γ_R(D) for all a ∈ R. This means that no attributes can be removed from the subset without affecting the dependency degree.
• The collection of all reducts is denoted by

    R_all = {X | X ⊆ C, γ_X(D) = γ_C(D); γ_{X−{a}}(D) ≠ γ_X(D), ∀a ∈ X}

• The intersection of all the sets in R_all is called the core, Core(C) = ∩ R_all, the elements of which are those attributes that cannot be eliminated without introducing more contradictions to the representation of the data set.
• The QuickReduct algorithm attempts to calculate reducts for a decision problem.

QuickReduct(C, D)
C: the set of all conditional attributes; D: the set of decision attributes.
1) R ← {}
2) Do
3)   T ← R
4)   ∀x ∈ (C − R)
5)     if γ_{R∪{x}}(D) > γ_T(D)
6)       T ← R ∪ {x}
7)   R ← T
8) Until γ_R(D) == γ_C(D)
9) Return R

2.7 Discernibility matrix
Many applications of rough sets make use of discernibility matrices for finding rules or reducts. A discernibility matrix of a decision table is a symmetric |U| × |U| matrix with entries defined by

    c_ij = {a ∈ C | a(x_i) ≠ a(x_j)},  i, j = 1, …, |U|

Each c_ij contains those attributes that differ between objects i and j.
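The QuickReduct pseudocode above translates almost line for line into Python. This is a sketch under the assumption that at least one attribute strictly increases γ on each pass (true for Table 1); function names are our own.

```python
ATTRS = "abcde"
ROWS = "SRTTR RSSST TRRSS SSRTT SRTRS TTRSS TSSST RSSRS".split()
TABLE = {i: dict(zip(ATTRS, row)) for i, row in enumerate(ROWS)}

def partition(P, table):
    classes = {}
    for x in table:
        key = tuple(table[x][a] for a in sorted(P))
        classes.setdefault(key, set()).add(x)
    return list(classes.values())

def gamma(P, Q, table):
    pos = set()
    for X in partition(Q, table):
        for B in partition(P, table):
            if B <= X:
                pos |= B
    return len(pos) / len(table)

def quickreduct(C, D, table):
    """Greedy QuickReduct: repeatedly add the attribute that most increases gamma."""
    R = set()
    target = gamma(C, D, table)  # dependency of the full attribute set
    while gamma(R, D, table) < target:
        T = R
        for x in sorted(C - R):  # sorted() makes tie-breaking deterministic
            if gamma(R | {x}, D, table) > gamma(T, D, table):
                T = R | {x}
        R = T
    return R

print(quickreduct(set("abcd"), {"e"}, TABLE))  # finds the reduct {b, d}
```

Note that the greedy search returns a single reduct; which one depends on the iteration order over C − R.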

Table 2. The decision-relative discernibility matrix

x ∈ U |    0     |    1     |   2   |   3   |    4     |  5  |  6
------+----------+----------+-------+-------+----------+-----+-----
  1   | a,b,c,d  |          |       |       |          |     |
  2   | a,c,d    | a,b,c    |       |       |          |     |
  3   | b,c      |          | a,b,d |       |          |     |
  4   | d        | a,b,c,d  |       | b,c,d |          |     |
  5   | a,b,c,d  | a,b,c    |       | a,b,d |          |     |
  6   | a,b,c,d  |          | b,c   |       | a,b,c,d  | b,c |
  7   | a,b,c,d  | d        |       | a,c,d |          |     | a,d

Grouping all entries containing single attributes forms the core of the dataset (those attributes appearing in every reduct). Here, the core of the dataset is {d}. A discernibility function f_D is a Boolean function of m Boolean variables a₁*, …, a_m* (corresponding to the attributes a₁, …, a_m) defined as

    f_D(a₁*, …, a_m*) = ∧{∨ c_ij* | 1 ≤ j < i ≤ |U|, c_ij ≠ ∅}

where c_ij* = {a* | a ∈ c_ij}.

The decision-relative discernibility function is

    f_D = (a ∨ b ∨ c ∨ d) ∧ (a ∨ c ∨ d) ∧ (b ∨ c) ∧ d ∧ (a ∨ b ∨ c) ∧ (a ∨ b ∨ d) ∧ (b ∨ c ∨ d) ∧ (a ∨ d)

Further simplification can be performed by removing those clauses that are subsumed by others:

    f_D = (b ∨ c) ∧ d

Hence, the minimal reducts are {b, d} and {c, d}.
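The decision-relative matrix and the reducts it encodes can be checked by brute force: a reduct must share an attribute with every non-empty matrix entry (i.e. satisfy every clause of f_D). A sketch over the Table 1 data (function names are our own; the brute-force search is practical only for small attribute sets):

```python
from itertools import combinations

ATTRS = "abcde"
ROWS = "SRTTR RSSST TRRSS SSRTT SRTRS TTRSS TSSST RSSRS".split()
TABLE = {i: dict(zip(ATTRS, row)) for i, row in enumerate(ROWS)}

def disc_matrix(table, conds="abcd", dec="e"):
    """Decision-relative matrix: entries only for pairs with different decisions."""
    m = {}
    for j, i in combinations(sorted(table), 2):
        if table[i][dec] != table[j][dec]:
            m[(i, j)] = {a for a in conds if table[i][a] != table[j][a]}
    return m

M = disc_matrix(TABLE)
# Singleton entries give the core, matching Table 2.
core = set().union(*(c for c in M.values() if len(c) == 1))
print(core)  # {'d'}

def min_reducts(M, conds="abcd"):
    """Smallest attribute sets intersecting every matrix entry (every clause)."""
    for r in range(1, len(conds) + 1):
        found = [set(S) for S in combinations(sorted(conds), r)
                 if all(set(S) & c for c in M.values())]
        if found:
            return found
    return []

print(min_reducts(M))  # the reducts {b, d} and {c, d}
```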

