
1 Incorporating User Provided Constraints into Document Clustering
Yanhua Chen, Manjeet Rege, Ming Dong, Jing Hua, Farshad Fotouhi
Department of Computer Science, Wayne State University, Detroit, MI 48202
{chenyanh, rege, mdong, jinghua, fotouhi}@wayne.edu

2 Outline
Introduction
Overview of related work
Semi-Supervised Non-negative Matrix Factorization (SS-NMF) for document clustering
Theoretical results for SS-NMF
Experiments and results
Conclusion

3 What is clustering?
Finding groups of objects such that the objects in a group are similar (or related) to one another and different from (or unrelated to) the objects in other groups:
–Inter-cluster distances are maximized
–Intra-cluster distances are minimized

4 Document Clustering
Grouping of text documents into meaningful clusters in an unsupervised manner (e.g., Government, Science, Arts).

5 Unsupervised Clustering Example

6 Semi-supervised clustering: problem definition
Input:
–A set of unlabeled objects
–A small amount of domain knowledge (labels or pairwise constraints)
Output:
–A partitioning of the objects into k clusters
Objective:
–Maximum intra-cluster similarity
–Minimum inter-cluster similarity
–High consistency between the partitioning and the domain knowledge
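The "high consistency" objective can be made concrete with a small helper (an illustrative sketch, not from the paper; the function name and signature are our own): given a partitioning and the two constraint sets, count the fraction of pairwise constraints it satisfies.

```python
def constraint_consistency(labels, must_link, cannot_link):
    """Fraction of pairwise constraints satisfied by a cluster assignment."""
    satisfied = 0
    total = len(must_link) + len(cannot_link)
    for i, j in must_link:
        satisfied += labels[i] == labels[j]    # must-link: same cluster
    for i, j in cannot_link:
        satisfied += labels[i] != labels[j]    # cannot-link: different clusters
    return satisfied / total if total else 1.0

labels = [0, 0, 1, 1, 2]
print(constraint_consistency(labels, must_link=[(0, 1), (2, 3)],
                             cannot_link=[(0, 4), (1, 2)]))  # 1.0
```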

7 Semi-Supervised Clustering
Depending on the domain knowledge given:
–Users provide class labels (seeded points) a priori for some of the documents
–Users know which few documents are related (must-link) or unrelated (cannot-link)

8 Why semi-supervised clustering?
Large amounts of unlabeled data exist, and more are being produced all the time.
Generating labels for data is expensive and usually requires human intervention.
Use human input to provide labels for some of the data:
–Improve existing naive clustering methods
–Use labeled data to guide clustering of unlabeled data
–End result is a better clustering of the data
Potential applications:
–Document/word categorization
–Image categorization
–Bioinformatics (gene/protein clustering)

9 Outline
Introduction
Overview of related work
Semi-supervised Non-negative Matrix Factorization (SS-NMF) for document clustering
Theoretical results for SS-NMF
Experiments and results
Conclusion

10 Clustering Algorithms
Document hierarchical clustering:
–Bottom-up, agglomerative
–Top-down, divisive
Document partitioning (flat clustering):
–K-means
–Probabilistic clustering using the Naïve Bayes or Gaussian mixture model, etc.
Document clustering based on graph models

11 Semi-supervised Clustering Algorithms
Semi-supervised clustering with labels (partial label information is given):
–SS-Seeded-Kmeans (Sugato Basu et al., ICML 2002)
–SS-Constraint-Kmeans (Sugato Basu et al., ICML 2002)
Semi-supervised clustering with constraints (pairwise must-link and cannot-link constraints are given):
–SS-COP-Kmeans (Wagstaff et al., ICML 2001)
–SS-HMRF-Kmeans (Sugato Basu et al., ACM SIGKDD 2004)
–SS-Kernel-Kmeans (Brian Kulis et al., ICML 2005)
–SS-Spectral-Normalized-Cuts (X. Ji et al., ACM SIGIR 2006)

12 Overview of K-means Clustering
K-means is a partitional clustering algorithm based on iterative relocation that partitions a dataset into k clusters.
Objective function: locally minimize the sum of squared distances between the data points and their corresponding cluster centers.
Algorithm: initialize k cluster centers randomly, then repeat until convergence:
–Cluster assignment step: assign each data point x_i to the cluster f_h whose center is closest to x_i
–Center re-estimation step: re-estimate each cluster center as the mean of the points in that cluster
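The two alternating steps can be sketched in a few lines of NumPy (a minimal illustration of the slide's algorithm, not the paper's implementation; the function and argument names are our own):

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize k cluster centers by sampling k distinct data points
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Cluster assignment step: nearest center by squared Euclidean distance
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Center re-estimation step: mean of the points in each cluster
        new_centers = np.array([X[labels == h].mean(axis=0) if np.any(labels == h)
                                else centers[h] for h in range(k)])
        if np.allclose(new_centers, centers):  # converged
            break
        centers = new_centers
    return labels, centers
```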

13 Semi-supervised Kernel K-means (SS-KK) [Brian Kulis et al., ICML 2005]
Semi-supervised kernel K-means objective:
J = Σ_i ||φ(x_i) − m_{h(i)}||² − Σ_{(x_i,x_j)∈ML, h(i)=h(j)} w_ij + Σ_{(x_i,x_j)∈CL, h(i)=h(j)} w̄_ij
where φ(·) is the kernel function mapping from input space to feature space, m_h is the centroid of cluster h, and w_ij (w̄_ij) is the cost of violating the constraint between two points.
–First term: kernel K-means objective function
–Second term: reward function for satisfying must-link constraints
–Third term: penalty function for violating cannot-link constraints

14 Overview of Spectral Clustering
Spectral clustering is a graph-theoretic clustering algorithm.
Weighted graph G = (V, E, A); minimize the between-cluster similarities (edge weights A_ij).

15 Spectral Normalized Cuts
Minimize the similarity between clusters π_1 and π_2: cut(π_1, π_2) = Σ_{i∈π_1, j∈π_2} A_ij
Balance by the cluster weights: Ncut(π_1, π_2) = cut(π_1, π_2)/assoc(π_1, V) + cut(π_1, π_2)/assoc(π_2, V)
With a suitably scaled cluster indicator vector q, the graph partition becomes a Rayleigh quotient, and the solution is the second-smallest eigenvector of the generalized eigenproblem (D − A) q = λ D q.
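For k = 2 this pipeline takes only a few lines (an illustrative sketch of the standard normalized-cut relaxation, not the paper's code): compute the normalized Laplacian, take its second eigenvector, and threshold it at zero.

```python
import numpy as np

def normalized_cut_2way(A):
    """Two-way normalized cut via the spectral relaxation (D - A) y = lambda D y."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    # Normalized Laplacian: I - D^{-1/2} A D^{-1/2}
    L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt
    eigvals, eigvecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    fiedler = D_inv_sqrt @ eigvecs[:, 1]   # map back to the indicator space
    return (fiedler > 0).astype(int)       # sign of the Fiedler vector = partition
```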

16 Semi-supervised Spectral Normalized Cuts (SS-SNC) [X. Ji et al., ACM SIGIR 2006]
Semi-supervised spectral learning objective: the normalized-cut objective augmented with constraint terms.
–First term: spectral normalized cut objective function
–Second term: reward function for satisfying must-link constraints
–Third term: penalty function for violating cannot-link constraints

17 Outline
Introduction
Related work
Semi-Supervised Non-negative Matrix Factorization (SS-NMF) for document clustering:
–NMF review
–Model formulation and algorithm derivation
Theoretical results for SS-NMF
Experiments and results
Conclusion

18 Non-negative Matrix Factorization (NMF)
NMF decomposes a matrix into two non-negative factors (D. Lee et al., Nature 1999):
X ≈ FG^T, obtained by min ||X − FG^T||²
Symmetric NMF for clustering (C. Ding et al., SIAM SDM 2005):
A ≈ GSG^T, obtained by min ||A − GSG^T||²
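The first factorization can be computed with the classic Lee–Seung multiplicative updates (a standard textbook sketch, not this paper's algorithm; `eps` is our own guard against division by zero):

```python
import numpy as np

def nmf(X, k, iters=300, seed=0, eps=1e-9):
    """Multiplicative updates for min ||X - F G^T||^2 with F, G >= 0."""
    rng = np.random.default_rng(seed)
    F = rng.random((X.shape[0], k))
    G = rng.random((X.shape[1], k))
    for _ in range(iters):
        # Each update multiplies by a non-negative ratio, so F, G stay >= 0
        F *= (X @ G) / (F @ (G.T @ G) + eps)
        G *= (X.T @ F) / (G @ (F.T @ F) + eps)
    return F, G
```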

19 SS-NMF
Incorporate prior knowledge into an NMF-based framework for document clustering.
Users provide pairwise constraints:
–Must-link constraints C_ML: two documents d_i and d_j must belong to the same cluster.
–Cannot-link constraints C_CL: two documents d_i and d_j must belong to different clusters.
Constraints come with an associated violation cost matrix W:
–W_reward: cost of violating the must-link constraint between documents d_i and d_j, if one exists.
–W_penalty: cost of violating the cannot-link constraint between documents d_i and d_j, if one exists.
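One straightforward way to encode these inputs (our own sketch; the names W_reward and W_penalty follow the slide, and the uniform costs are illustrative assumptions):

```python
import numpy as np

def build_cost_matrices(n, must_link, cannot_link, w_reward=1.0, w_penalty=1.0):
    """Symmetric n x n cost matrices for must-link and cannot-link pairs."""
    W_reward = np.zeros((n, n))
    W_penalty = np.zeros((n, n))
    for i, j in must_link:          # cost for violating a must-link
        W_reward[i, j] = W_reward[j, i] = w_reward
    for i, j in cannot_link:        # cost for violating a cannot-link
        W_penalty[i, j] = W_penalty[j, i] = w_penalty
    return W_reward, W_penalty
```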

20 SS-NMF Algorithm
Define the objective function of SS-NMF: the symmetric factorization min ||A − GSG^T||² augmented with the reward term for must-link constraints and the penalty term for cannot-link constraints, where the cluster label of document d_i is given by the largest entry in row i of the cluster indicator G.
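A rough sketch of the idea (not the paper's exact derivation or update rules): fold the constraint costs into the document similarity matrix, then run Ding-style multiplicative updates for min ||A − GSG^T||²; the hard label of document i is the argmax of row i of G. The weight `lam` and the clipping of the adjusted affinity are our own illustrative choices.

```python
import numpy as np

def ss_nmf(A, W_reward, W_penalty, k, lam=1.0, iters=300, seed=0, eps=1e-9):
    """Symmetric tri-factorization of a constraint-adjusted affinity matrix."""
    # Reward must-link pairs, penalize cannot-link pairs, keep affinities >= 0
    Ac = np.maximum(A + lam * W_reward - lam * W_penalty, 0.0)
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    G = rng.random((n, k))
    S = rng.random((k, k))
    S = (S + S.T) / 2                       # keep S symmetric
    for _ in range(iters):
        # Multiplicative updates for min ||Ac - G S G^T||^2, G, S >= 0
        G *= np.sqrt((Ac @ G @ S) / (G @ G.T @ Ac @ G @ S + eps))
        S *= np.sqrt((G.T @ Ac @ G) / (G.T @ G @ S @ G.T @ G + eps))
    return G.argmax(axis=1), G, S           # hard labels, soft indicator, centroids
```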

21 Summary of SS-NMF Algorithm

22 Outline
Introduction
Overview of related work
Semi-supervised Non-negative Matrix Factorization (SS-NMF) for document clustering
Theoretical results for SS-NMF
Experiments and results
Conclusion

23 Algorithm Correctness and Convergence
Based on constrained optimization theory and an auxiliary function, we can prove for SS-NMF:
1. Correctness: the solution converges to a local minimum
2. Convergence: the iterative algorithm converges
(Details in papers [1], [2])
[1] Y. Chen, M. Rege, M. Dong and J. Hua, “Incorporating User Provided Constraints into Document Clustering”, Proc. of IEEE ICDM, Omaha, NE, October 2007. (Regular paper, acceptance rate 7.2%)
[2] Y. Chen, M. Rege, M. Dong and J. Hua, “Non-negative Matrix Factorization for Semi-supervised Data Clustering”, Journal of Knowledge and Information Systems, to appear, 2008.

24 SS-NMF: General Framework for Semi-supervised Clustering
Proof sketch: orthogonal symmetric semi-supervised NMF is equivalent to Semi-supervised Kernel K-means (SS-KK) and Semi-supervised Spectral Normalized Cuts (SS-SNC)!

25 Advantages of SS-NMF
Cluster indicator:
–SS-KK: hard clustering; exactly orthogonal cluster indicator
–SS-SNC: the derived latent semantic space is orthogonal; no direct relationship between the singular vectors and the clusters
–SS-NMF: soft clustering; maps the documents into a non-negative latent semantic space which may not be orthogonal; the cluster label can be determined by the axis with the largest projection value
Time complexity:
–SS-KK: iterative algorithm
–SS-SNC: solves a computationally expensive constrained eigen-decomposition
–SS-NMF: iterative algorithm that can obtain a partial answer at intermediate stages of the solution by specifying a fixed number of iterations; simple basic matrix computations, easily deployed over a distributed computing environment when dealing with large document collections

26 Outline
Introduction
Overview of related work
Semi-supervised Non-negative Matrix Factorization (SS-NMF) for document clustering
Theoretical results for SS-NMF
Experiments and results:
–Artificial Toy Data
–Real Data
Conclusion

27 Experiments on Toy Data
1. Artificial toy data consisting of two natural clusters

28 Results on Toy Data (SS-KK and SS-NMF)
Right table: difference between the cluster indicator G of SS-KK (hard clustering) and SS-NMF (soft clustering) for the toy data.
–Hard clustering: each object belongs to a single cluster
–Soft clustering: each object is probabilistically assigned to clusters
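The difference is easy to see on a toy indicator matrix (the numbers below are hypothetical, not from the paper's table): a soft indicator G holds per-cluster weights, and hardening it is just a row-wise argmax.

```python
import numpy as np

# Hypothetical soft cluster indicator: row i gives object i's weight per cluster
G = np.array([[0.95, 0.05],
              [0.80, 0.20],
              [0.10, 0.90]])

hard_labels = G.argmax(axis=1)  # hard assignment: one cluster per object
print(hard_labels.tolist())     # [0, 0, 1]
```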

29 Results on Toy Data (SS-SNC and SS-NMF)
(a) Data distribution in the SS-SNC subspace of the first two singular vectors. There is no relationship between the axes and the clusters.
(b) Data distribution in the SS-NMF subspace of the two column vectors of G. The data points from the two clusters get distributed along the two axes.

30 Time Complexity Analysis
Figure (above): computational speed comparison for SS-KK, SS-SNC and SS-NMF

31 Experiments on Text Data
2. Summary of data sets [1] used in the experiments.
[1] http://www.cs.umn.edu/~han/data/tmdata.tar.gz
Evaluation metric: AC = (Σ_{i=1}^{n} δ(α_i, map(l_i))) / n, where n is the total number of documents in the experiment, δ is the delta function that equals one if its two arguments are equal, l_i is the estimated label, and α_i is the ground-truth label.
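The metric can be sketched as follows (our own helper, assuming the number of clusters equals the number of classes; for the small k used here, the best cluster-to-class map can be found by brute force over permutations):

```python
from itertools import permutations

def clustering_accuracy(truth, pred):
    """Fraction of documents whose mapped cluster label matches the ground truth,
    maximized over all one-to-one cluster-to-class mappings."""
    classes = sorted(set(truth))
    clusters = sorted(set(pred))
    best = 0
    for perm in permutations(classes):
        mapping = dict(zip(clusters, perm))
        hits = sum(t == mapping[p] for t, p in zip(truth, pred))
        best = max(best, hits)
    return best / len(truth)
```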

32 Results on Text Data (Compare with Unsupervised Clustering)
(1) Comparison with unsupervised clustering approaches.
Note: SS-NMF uses 3% pairwise constraints.

33 Results on Text Data (Before Clustering and After Clustering)
(a) Typical document-document matrix before clustering
(b) Document-document similarity matrix after clustering with SS-NMF (k=2)
(c) Document-document similarity matrix after clustering with SS-NMF (k=5)

34 Results on Text Data (Clustering with Different Constraints)
Left table: comparison of the confusion matrix C and the normalized cluster centroid matrix S of SS-NMF for different percentages of pairwise-constrained documents

35 Results on Text Data (Compare with Semi-supervised Clustering)
(2) Comparison with SS-KK and SS-SNC: (a) Graft-Phos, (b) England-Heart, (c) Interest-Trade

36 Results on Text Data (Compare with Semi-supervised Clustering)
Comparison with SS-KK and SS-SNC (Fbis2, Fbis3, Fbis4, Fbis5)

37 Experiments on Image Data
3. Image data sets [2] used in the experiments.
[2] http://kdd.ics.uci.edu/databases/CorelFeatures/CorelFeatures.data.html
Figure (above): sample images for image categorization (from top to bottom: O-Owls, R-Roses, L-Lions, E-Elephants, H-Horses)

38 Results on Image Data (Compare with Unsupervised Clustering)
(1) Comparison with unsupervised clustering approaches.
Table (above): comparison of image clustering accuracy between KK, SNC, NMF and SS-NMF with only 3% pairwise constraints on the images. SS-NMF consistently outperforms the other well-established unsupervised image clustering methods.

39 Results on Image Data (Compare with Semi-supervised Clustering)
(2) Comparison with SS-KK and SS-SNC.
Left figure: comparison of image clustering accuracy between SS-KK, SS-SNC, and SS-NMF for different percentages of constrained image pairs: (a) O-R, (b) L-H, (c) R-L, (d) O-R-L.

40 Results on Image Data (Compare with Semi-supervised Clustering)
(2) Comparison with SS-KK and SS-SNC.
Left figure: comparison of image clustering accuracy between SS-KK, SS-SNC, and SS-NMF for different percentages of constrained image pairs: (e) L-E-H, (f) O-R-L-E, (g) O-L-E-H, (h) O-R-L-E-H.

41 Outline
Introduction
Related work
Semi-supervised Non-negative Matrix Factorization (SS-NMF) for document clustering
Theoretical results for SS-NMF
Experiments and results
Conclusion

42 Conclusion
Semi-supervised clustering has many real-world applications and outperforms traditional clustering algorithms.
The semi-supervised NMF algorithm provides a unified mathematical framework for semi-supervised clustering.
Many existing semi-supervised clustering algorithms can be extended to multi-type object co-clustering tasks.

43 References
[1] Y. Chen, M. Rege, M. Dong and F. Fotouhi, “Deriving Semantics for Image Clustering from Accumulated User Feedbacks”, Proc. of ACM Multimedia, Germany, 2007.
[2] Y. Chen, M. Rege, M. Dong and J. Hua, “Incorporating User Provided Constraints into Document Clustering”, Proc. of IEEE ICDM, Omaha, NE, October 2007. (Regular paper, acceptance rate 7.2%)
[3] Y. Chen, M. Rege, M. Dong and J. Hua, “Non-negative Matrix Factorization for Semi-supervised Data Clustering”, Journal of Knowledge and Information Systems, invited as a best paper of ICDM 07, to appear 2008.
