Adaptive Clustering Of Incomplete Data Using Neuro-fuzzy Kohonen Network.


1 Adaptive Clustering Of Incomplete Data Using Neuro-fuzzy Kohonen Network

2 Outline
- Data with gaps clustering on the basis of a neuro-fuzzy Kohonen network
- Adaptive algorithm for probabilistic fuzzy clustering
- Adaptive probabilistic fuzzy clustering algorithm for data with missing values
- Adaptive algorithm for possibilistic fuzzy clustering
- Adaptive algorithm for possibilistic fuzzy clustering of data with missing values

3 Introduction
- The clustering of multivariate observations is a problem often encountered in applications connected with Data Mining and Exploratory Data Analysis. The conventional approach to these problems requires that each observation belong to exactly one cluster, yet in many situations a feature vector can belong to several classes with different levels of probability or possibility. Such situations are the subject of fuzzy cluster analysis, which is intensively developing today.
- In many practical Data Mining tasks, including clustering, data sets may contain gaps: information that, for whatever reason, is missing. In this situation, approaches based on the mathematical apparatus of Computational Intelligence are more effective, first of all artificial neural networks and various modifications of the classical fuzzy c-means (FCM) method.

4 Processing of data with gaps
[Flow diagram: Data Set With Gaps → Filling in missing values using a specialized algorithm → Data Set → Clustering]

5 Algorithms for filling the missing values
[Figure: overview of gap-filling algorithms; not reproduced in the transcript]
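The figure itself did not survive extraction, but a minimal sketch of one common gap-filling baseline of the kind this slide surveys, column-mean imputation, can illustrate the idea (assumes gaps are marked with NaN; the function name is hypothetical):

```python
import numpy as np

def mean_impute(X):
    """Fill each NaN with the mean of its column (a simple gap-filling baseline)."""
    X = np.asarray(X, dtype=float).copy()
    col_means = np.nanmean(X, axis=0)   # per-feature means, ignoring gaps
    gaps = np.isnan(X)                  # boolean mask of missing entries
    X[gaps] = np.take(col_means, np.where(gaps)[1])
    return X
```

After imputation the completed array can be passed to any batch clustering procedure, which is exactly the two-stage pipeline shown on the previous slide.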

6 Data with gaps clustering on the basis of a neuro-fuzzy Kohonen network
- The problem of clustering multivariate observations often occurs in applications associated with Data Mining.
- The traditional approach to these tasks requires that each observation relate to only one cluster, although the more realistic situation is when the processed feature vector may belong, with different levels of probability or possibility, to more than one class [Bezdek, 1981; Hoeppner, 1999; Xu, 2009].
- Notable approaches and solutions are efficient only when the original data set has batch form and does not change during the analysis. However, there is a wide enough class of problems in which the data are fed to processing sequentially, in on-line mode, as occurs when training Kohonen self-organizing maps [Kohonen, 1995]. In this case it is not known beforehand which of the processed vectors contain missing values.
- This work is devoted to solving the problem of on-line clustering of data based on the Kohonen neural network, adapted for operation in the presence of overlapping classes.

7 Adaptive algorithm for probabilistic fuzzy clustering

The source information is the "object–property" table (1) of N observations with n features:

        1      …   p      …   j      …   n
  1     x_11   …   x_1p   …   x_1j   …   x_1n
  …
  i     x_i1   …   x_ip   …   x_ij   …   x_in
  …
  k     x_k1   …   x_kp   …   x_kj   …   x_kn
  …
  N     x_N1   …   x_Np   …   x_Nj   …   x_Nn

8 The algorithm steps
- Introduce the objective function of clustering (3) and derive the updates for the membership levels w_q (4) and the prototypes (5).
[Equations (3)–(5) are images and are not reproduced in the transcript.]
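Since equations (3)–(5) did not survive extraction, here is a hedged reconstruction of the standard probabilistic fuzzy clustering forms they most likely correspond to [Bezdek, 1981], with memberships w_q(k), prototypes c_q, and fuzzifier β > 1:

```latex
% Objective function of clustering, under the probabilistic constraint
E(w_q, c_q) = \sum_{k=1}^{N} \sum_{q=1}^{m} w_q^{\beta}(k)\, \lVert x(k) - c_q \rVert^2,
\qquad \sum_{q=1}^{m} w_q(k) = 1 .

% Minimization (Lagrange multipliers) gives the membership and prototype updates
w_q(k) = \frac{\lVert x(k) - c_q \rVert^{-2/(\beta-1)}}
              {\sum_{l=1}^{m} \lVert x(k) - c_l \rVert^{-2/(\beta-1)}},
\qquad
c_q = \frac{\sum_{k=1}^{N} w_q^{\beta}(k)\, x(k)}
           {\sum_{k=1}^{N} w_q^{\beta}(k)} .
```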

9 Adaptive probabilistic fuzzy clustering algorithm for data with gaps
If the data array contains gaps, the approach discussed above should be modified accordingly. For example, in [Hathaway, 2001] a modification of the FCM procedure based on the partial distance strategy (PDS FCM) was proposed.
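The partial distance of [Hathaway, 2001] computes the squared Euclidean distance over the observed components only and rescales it by the fraction of components that are present. A minimal sketch (gaps marked with NaN; the prototype c is assumed complete):

```python
import numpy as np

def partial_distance_sq(x, c):
    """Squared partial distance: sum squared differences over the observed
    components of x, then rescale by n / (number of observed components)."""
    x, c = np.asarray(x, float), np.asarray(c, float)
    observed = ~np.isnan(x)          # gaps in x are marked with NaN
    n_obs = observed.sum()
    if n_obs == 0:
        raise ValueError("all components of x are missing")
    d2 = np.sum((x[observed] - c[observed]) ** 2)
    return (x.size / n_obs) * d2
```

Substituting this quantity for the squared Euclidean metric is the only change PDS FCM makes to the distance computation.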

10 Processing of data with gaps
[Flow diagram: Data Set With Gaps → Filling in missing values using a specialized algorithm → Data Set → Clustering]

11 Data set preprocessing
1. Partition of the original data set into classes
2. Centering
3. Standardization
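The centering and standardization steps listed above can be sketched as follows (a minimal illustration; the function name is hypothetical):

```python
import numpy as np

def center_standardize(X):
    """Center each feature to zero mean, then scale to unit standard deviation."""
    X = np.asarray(X, float)
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma[sigma == 0] = 1.0   # guard against constant features
    return (X - mu) / sigma
```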

12 Objective function of clustering
[Equations (6) and (7) are images and are not reproduced in the transcript.]

13 Adaptive probabilistic fuzzy clustering algorithm for data with gaps
[Equations (8) and (9) are images and are not reproduced in the transcript.]

14 Adaptive probabilistic fuzzy clustering algorithm for data with gaps
Kohonen’s “winner takes more” rule
[Equations (10) and (11) are images and are not reproduced in the transcript.]
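The update rule itself is not in the transcript; as a hedged sketch, the self-learning rule usually meant by “winner takes more” in this family of fuzzy Kohonen networks moves every prototype toward the input, weighted by its membership (symbols assumed: learning rate η(k), memberships w_q from the previous slides):

```latex
% "Winner takes more": all prototypes are pulled toward x(k), each
% weighted by its fuzzy membership, rather than only the winner moving.
c_q(k+1) = c_q(k) + \eta(k)\, w_q^{\beta}(k)\, \bigl( x(k) - c_q(k) \bigr),
\qquad q = 1, \dots, m .
```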

15 Adaptive algorithm for possibilistic fuzzy clustering
The main disadvantage of probabilistic algorithms is connected with the constraint that the membership levels must sum to unity. This has led to the creation of possibilistic fuzzy clustering algorithms [Krishnapuram, 1993]. In possibilistic clustering algorithms the objective function has the form (12), where the scalar parameter determines the distance at which the level of membership equals 0.5; minimization yields the membership update (13).
[Equations (12) and (13) are images and are not reproduced in the transcript.]
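As a hedged reconstruction, the standard possibilistic forms of [Krishnapuram, 1993] matching the description above (with μ_q denoting the scalar bandwidth parameter the slide mentions) are:

```latex
% Possibilistic objective: no sum-to-one constraint on memberships;
% the second term penalizes the trivial solution w_q(k) = 0.
E = \sum_{k=1}^{N} \sum_{q=1}^{m} w_q^{\beta}(k)\, \lVert x(k) - c_q \rVert^2
  + \sum_{q=1}^{m} \mu_q \sum_{k=1}^{N} \bigl( 1 - w_q(k) \bigr)^{\beta} .

% Minimization yields
w_q(k) = \left( 1 + \left( \frac{\lVert x(k) - c_q \rVert^2}{\mu_q}
         \right)^{1/(\beta-1)} \right)^{-1},
% so that w_q(k) = 0.5 exactly when \lVert x(k) - c_q \rVert^2 = \mu_q .
```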

16 Adaptive algorithm for possibilistic fuzzy clustering of data with gaps
Adopting the partial distance (PD) instead of the Euclidean metric, we can write the objective function as (14) and then solve the resulting system of equations (15), (16). Thus, the process of possibilistic fuzzy clustering of data with gaps can also be realized using a neuro-fuzzy Kohonen network.
[Equations (14)–(16) are images and are not reproduced in the transcript.]
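To make the combination concrete, here is a minimal batch sketch of possibilistic fuzzy clustering with the partial-distance metric replacing the Euclidean one. It is an illustration under stated assumptions, not the paper's algorithm: gaps are NaN, a single bandwidth mu is shared by all clusters, prototypes are initialized from complete rows (at least m of which are assumed to exist), and all names are hypothetical.

```python
import numpy as np

def possibilistic_cluster_with_gaps(X, m, beta=2.0, mu=1.0, n_epochs=50, seed=0):
    """Possibilistic fuzzy clustering of data with gaps (NaN entries),
    using the partial distance in place of the Euclidean metric."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, float)
    N, n = X.shape
    obs = ~np.isnan(X)
    # initialize prototypes at randomly chosen complete rows
    complete = np.where(obs.all(axis=1))[0]
    C = X[rng.choice(complete, size=m, replace=False)].copy()
    for _ in range(n_epochs):
        # squared partial distances, rescaled by n / (#observed components)
        D2 = np.empty((N, m))
        for q in range(m):
            diff2 = np.where(obs, (X - C[q]) ** 2, 0.0)
            D2[:, q] = (n / obs.sum(axis=1)) * diff2.sum(axis=1)
        # possibilistic memberships: w = 1 / (1 + (D2/mu)^(1/(beta-1)))
        W = 1.0 / (1.0 + (D2 / mu) ** (1.0 / (beta - 1.0)))
        # prototype update over observed components only
        Wb = W ** beta
        for q in range(m):
            num = (Wb[:, [q]] * np.where(obs, X, 0.0)).sum(axis=0)
            den = (Wb[:, [q]] * obs).sum(axis=0)
            C[q] = num / np.maximum(den, 1e-12)
    return C, W
```

Because memberships are unconstrained across clusters, each prototype settles into whichever dense region it starts near, which is the behavior the possibilistic formulation is designed to produce.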

17 Results
Iris data set: a benchmark data set in pattern recognition analysis, freely available at the UCI Machine Learning Repository.
- It contains three clusters (types of Iris plants: Iris Setosa, Iris Versicolor and Iris Virginica) of 50 data points each, with 4 dimensions (features): sepal length, sepal width, petal length and petal width.
- The class Iris Setosa is linearly separable from the other two, which are not linearly separable from each other.
Validation
- Cluster validity refers to the question of whether a given fuzzy partition fits the data at all. A clustering algorithm always finds the best fit for a fixed number of clusters and parameterized cluster shapes, but this does not mean that even the best fit is meaningful: the number of clusters might be wrong, or the cluster shapes might not correspond to the groups in the data, if the data can be grouped in a meaningful way at all. Two main approaches to determining the appropriate number of clusters in data can be distinguished:

18 Results
Validity measures:
- Partition Coefficient (PC): measures the amount of "overlapping" between clusters. The optimal number of clusters is at the maximum value.
- Classification Entropy (CE): measures only the fuzziness of the cluster partition; similar in spirit to the Partition Coefficient.
- Partition Index (SC): the ratio of the sum of compactness to the separation of the clusters; a sum of individual cluster validity measures normalized by the fuzzy cardinality of each cluster. A lower value of SC indicates a better partition.
- Separation Index (S): in contrast to the Partition Index (SC), the Separation Index uses a minimum-distance separation for partition validity.
- Xie and Beni's Index (XB): quantifies the ratio of the total variation within clusters to the separation of clusters. The optimal number of clusters should minimize the value of the index.
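Two of the indices above are short enough to sketch directly; a minimal illustration assuming a membership matrix W of shape (N, m), data X, and prototypes C (function names are hypothetical):

```python
import numpy as np

def partition_coefficient(W):
    """PC = (1/N) * sum of squared memberships; equals 1.0 for a crisp partition."""
    W = np.asarray(W, float)
    return float((W ** 2).sum() / W.shape[0])

def xie_beni(X, C, W, beta=2.0):
    """XB = within-cluster weighted variation / (N * min squared distance
    between prototypes); lower values indicate a better partition."""
    X, C, W = (np.asarray(a, float) for a in (X, C, W))
    D2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)   # (N, m)
    num = ((W ** beta) * D2).sum()
    sep = min(((C[i] - C[j]) ** 2).sum()
              for i in range(len(C)) for j in range(len(C)) if i != j)
    return float(num / (X.shape[0] * sep))
```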

19 Results

Clustering algorithm                                                      PC    SC    XB
Adaptive algorithm for probabilistic fuzzy clustering                      –     –     –
Adaptive probabilistic fuzzy clustering algorithm for data with gaps       –     –     –
Adaptive algorithm for possibilistic fuzzy clustering                      –     –     –
Adaptive algorithm for possibilistic fuzzy clustering of data with gaps    –     –     –
Fuzzy c-means                                                              –     –     –
Gustafson–Kessel                                                           –     –     –
Gath–Geva                                                                  –     –     –

[The numeric index values did not survive extraction and are not reproduced.]

