On Node Classification in Dynamic Content-based Networks.


1 On Node Classification in Dynamic Content-based Networks

2 Motivation
[Figure] A content-based co-authorship network, year 2001: authors such as Ke Wang, Jiawei Han, Jian Pei, and Kenneth A. Ross, each annotated with keywords from their papers ("Data Mining", "Databases", "Clustering", "Sequential Pattern", "Association Rules", ...).

3 Motivation
[Figure] The same network, year 2002: new authors appear (Marianne Winslett, Xifeng Yan, Philip S. Yu) and the keyword annotations shift ("Web", "Stream", "Ranking", "Parallel", "Distributed", ...).

4 Motivation
[Figure] The network, year 2003: further structural and content changes (Charu Aggarwal joins; keywords include "Graph", "Wireless", "Indexing", "XML", ...).

5 Motivation
- Networks annotated with an increasing amount of text: citation networks, co-authorship networks, product databases with large amounts of text content, etc. These networks are highly dynamic.
- The node classification problem arises in many network scenarios in which the underlying nodes are associated with content. A subset of the nodes in the network may be labeled. Can we use these labeled nodes, in conjunction with the network structure, to classify the nodes that are not currently labeled?
- Applications

6 Challenges
- Information networks are very large: the method must be scalable and efficient.
- Many such networks are dynamic: the method must be updatable in real time, self-adaptable, and robust.
- Such networks are often noisy: the method must be intelligent and selective.
- Such networks exhibit heterogeneous correlations.

7 Outline
- Related Work
- DYCOS: DYnamic Classification algorithm with cOntent and Structure
  - Semi-bipartite content-structure transformation
  - Classification using a series of text and link-based random walks
  - Accuracy analysis
- Experiments (NetKit-SRL)
- Conclusion

8 Related Work
- Link-based classification (Bhagat et al., WebKDD 2007): local iterative; global nearest neighbor.
- Content-only classification (Nigam et al., Machine Learning 2000): uses each object's own attributes only.
- Relational classification (Sen et al., Technical Report 2004): uses each object's own attributes, plus the attributes and known labels of its neighbors.
- Collective classification (Macskassy & Provost, JMLR 2007; Sen et al., Technical Report 2004; Chakrabarti, SIGMOD 1998):
  - Local classification: flexible, ranging from a decision tree to an SVM.
  - Approximate inference algorithms: iterative classification, Gibbs sampling, loopy belief propagation, relaxation labeling.

9 Outline
- Related Work
- DYCOS: DYnamic Classification algorithm with cOntent and Structure
  - Semi-bipartite content-structure transformation
  - Classification using a series of text and link-based random walks
  - Accuracy analysis
- Experiments (NetKit-SRL)
- Conclusion

10 DYCOS in a Nutshell
- Node classification in a dynamic environment: both the structure and the content of the network change over time.
- Dynamic network: the entire network at time t is denoted by G_t = (N_t, A_t, T_t), where N_t is the node set, A_t the edge set, and T_t the set of labeled nodes.
- Problem statement: classify the unlabeled nodes (N_t \ T_t) using both the content and the structure of the network, for all time stamps, in an efficient and accurate manner.
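The snapshot notation G_t = (N_t, A_t, T_t) can be sketched as a small, incrementally updatable structure. This is an illustrative Python sketch; the class name, fields, and methods are our own assumptions, not the paper's implementation:

```python
class DynamicNetwork:
    """Snapshot of the network at time t: nodes N_t, edges A_t, labels T_t."""

    def __init__(self):
        self.nodes = set()    # N_t: structural nodes
        self.edges = set()    # A_t: edges
        self.labeled = {}     # T_t: node -> known class label

    def add_edge(self, u, v):
        """Incremental update as the network grows over time."""
        self.nodes.update((u, v))
        self.edges.add((u, v))

    def unlabeled(self):
        """N_t \\ T_t: the nodes that remain to be classified."""
        return self.nodes - self.labeled.keys()
```

The point of the representation is that each time stamp only appends nodes, edges, and labels, so the unlabeled set N_t \ T_t can be recomputed cheaply at any time.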

11 Semi-bipartite Transformation
- Text-augmented representation, leveraged for a random walk-based classification model that uses both links and text.
- Two partitions: structural nodes and word nodes.
- Semi-bipartite: one partition of nodes (the structural nodes) is allowed to have edges either within the set or to nodes in the other partition; word nodes link only to structural nodes.
- Supports efficient updates upon dynamic changes.

12 Random Walk-Based Classification
- Random walks over the augmented structure:
  - Starting node: the unlabeled node to be classified.
  - Structural hop: a random jump from a structural node to one of its neighbors.
  - Content-based multi-hop: a jump from a structural node to another through implicit common word nodes.
  - Structural parameter p_s: the probability of taking a structural hop rather than a content-based multi-hop.
- Classification: the starting node is assigned the class label encountered most frequently during the random walks.
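A sketch of the walk process, assuming structural nodes are plain identifiers and word nodes are tagged as ("word", w) tuples. This is an illustrative sketch; parameter names follow the slides, not any released code:

```python
import random
from collections import Counter

def classify_by_random_walks(adj, labels, start, p_s=0.7,
                             num_walks=10, walk_len=5, seed=0):
    """Classify `start` by the majority label seen over short random walks.

    With probability p_s take a structural hop to a neighboring structural
    node; otherwise take a content-based multi-hop: jump to a word node,
    then on to another structural node sharing that word.
    """
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(num_walks):
        cur = start
        for _ in range(walk_len):
            nbrs = adj.get(cur, ())
            struct = [n for n in nbrs if not (isinstance(n, tuple) and n[0] == "word")]
            words = [n for n in nbrs if isinstance(n, tuple) and n[0] == "word"]
            if words and (not struct or rng.random() > p_s):
                w = rng.choice(words)                    # content-based multi-hop
                targets = [n for n in adj[w] if n != cur]
                if not targets:
                    continue
                cur = rng.choice(targets)
            elif struct:
                cur = rng.choice(struct)                 # structural hop
            else:
                break
            if cur in labels:                            # count labeled visits
                votes[labels[cur]] += 1
    return votes.most_common(1)[0][0] if votes else None
```

For example, if the starting node's only neighbors are labeled with the same class, every walk votes for that class and the node is assigned it.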

13 Gini-Index & Inverted Lists
- Discriminative keywords: a set M_t of the top m words with the highest discriminative power is used to construct the word-node partition.
- Gini-index: the value of G(w) lies in the range (0, 1); words with a higher gini-index are more discriminative for classification purposes.
- Inverted lists: an inverted list of keywords for each node, and an inverted list of nodes for each keyword.
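A sketch of keyword selection and the inverted lists. The slides omit the exact formula, so the gini-index below (sum of squared class-fractions of a word's occurrences, a common formulation) is an assumption, as are all function names:

```python
def gini_index(class_counts):
    """Gini-index of a word from its per-class occurrence counts.

    Sum of squared class-fractions: higher values mean the word's
    occurrences concentrate in fewer classes, i.e. it is more
    discriminative. (Assumed formulation; not stated on the slides.)
    """
    total = sum(class_counts.values())
    return sum((c / total) ** 2 for c in class_counts.values())

def top_m_words(word_class_counts, m):
    """The m words with the highest gini-index, used as word nodes (M_t)."""
    return sorted(word_class_counts,
                  key=lambda w: gini_index(word_class_counts[w]),
                  reverse=True)[:m]

def build_inverted_lists(node_text):
    """Inverted list of nodes for each keyword."""
    nodes_of = {}
    for node, words in node_text.items():
        for w in words:
            nodes_of.setdefault(w, set()).add(node)
    return nodes_of
```

A word occurring in only one class scores 1.0, while a word split evenly across two classes scores 0.5, so the ranking prefers class-pure words.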

14 Analysis
- Why do we care? DYCOS essentially uses Monte-Carlo sampling to sample paths from each unlabeled node. Advantage: a fast approach. Disadvantage: a potential loss of accuracy. Can we analyze how accurate DYCOS sampling is?
- Probabilistic bound, bi-class classification: two classes C_1 and C_2 with E[Pr[C_1]] = f_1, E[Pr[C_2]] = f_2, and f_1 - f_2 = b >= 0. With l sampled walks, Pr[mis-classification] <= exp(-l*b^2/2).
- Probabilistic bound, multi-class classification: k classes {C_1, C_2, ..., C_k}; the sampling process is b-accurate with Pr[b-accurate] >= 1 - (k-1)*exp(-l*b^2/2).
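The two bounds can be evaluated directly; a minimal sketch (function names are ours), where l is the number of sampled random walks and b the expected visit-probability gap:

```python
import math

def misclassification_bound(l, b):
    """Bi-class bound: Pr[mis-classification] <= exp(-l * b^2 / 2)."""
    return math.exp(-l * b * b / 2)

def b_accurate_bound(l, b, k):
    """Multi-class bound: Pr[b-accurate] >= 1 - (k-1) * exp(-l * b^2 / 2)."""
    return 1 - (k - 1) * math.exp(-l * b * b / 2)
```

For instance, with l = 100 walks and a gap of b = 0.3, the bi-class bound is exp(-4.5), about 0.011, so even a modest number of walks gives a small misclassification probability; the bound tightens exponentially as l grows.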

15 Outline
- Related Work
- DYCOS: DYnamic Classification algorithm with cOntent and Structure
  - Semi-bipartite content-structure transformation
  - Classification using a series of text and link-based random walks
  - Accuracy analysis
- Experiments (NetKit-SRL)
- Conclusion

16 Experimental Results
- Data sets:
  - CORA: a set of research papers and the citation relations among them. Each node is a paper and each edge is a citation relation. A total of 12,313 English words are extracted from the paper titles. We segment the data into 10 synthetic time periods.
  - DBLP: a set of authors and their collaborations. Each node is an author and each edge is a collaboration. A total of 194 English words in the domain of computer science are used. We segment the data into 36 annual graphs from 1975 to 2010.

17 Experimental Results
- NetKit-SRL toolkit: an open-source, publicly available toolkit for statistical relational learning in networked data (Macskassy and Provost, 2007), providing instantiations of previous relational and collective classification algorithms.
  - Configuration: local classifier = domain-specific class prior; relational classifier = network-only multinomial Bayes; collective inference = relaxation labeling.
- DYCOS parameters:
  1) the number of most discriminative words, m;
  2) the size constraint of the inverted list for each keyword, a;
  3) the number of top content-hop neighbors, q;
  4) the number of random walks, l;
  5) the length of each random walk, h;
  6) the structural parameter, p_s.
- The results demonstrate that DYCOS improves classification accuracy over NetKit by 7.18% to 17.44%, while reducing the runtime to only 14.60% to 18.95% of NetKit's.

18 Experimental Results
[Figures] DYCOS vs. NetKit on CORA: classification accuracy comparison and classification time comparison.

19 Experimental Results
[Figures] Parameter sensitivity of DYCOS on the CORA and DBLP data: sensitivity to m, l and h (a=30, p_s=70%), and sensitivity to a, m and p_s (l=3, h=5).

20 Experimental Results
Dynamic updating time, CORA (10 synthetic time periods):

  Time Period      |   1   |   2   |   3   |   4   |   5   |   6   |   7   |   8   |   9   |  10
  Update Time (s)  | 0.019 | 0.013 | 0.015 | 0.013 | 0.023 | 0.015 | 0.014 | 0.014 | 0.013 | 0.011

Dynamic updating time, DBLP (annual graphs, grouped):

  Time Period      | 1975-1989 | 1990-1999 | 2000-2010
  Update Time (s)  |  0.03107  |  0.22671  |  1.00154

21 Outline
- Related Work
- DYCOS: DYnamic Classification algorithm with cOntent and Structure
  - Semi-bipartite content-structure transformation
  - Classification using a series of text and link-based random walks
  - Accuracy analysis
- Experiments (NetKit-SRL)
- Conclusion

22 Conclusion
- We propose an efficient, dynamic, and scalable method for node classification in dynamic networks.
- We provide analysis of how accurate the proposed method will be in practice.
- We present experimental results on real data sets, showing that our algorithms are more effective and efficient than competing algorithms.


24 Challenges
- Information networks are very large: the method must be scalable and efficient.
- Many such networks are dynamic: the method must be updatable in real time, self-adaptable, and robust.
- Such networks are often noisy: the method must be intelligent and selective.
- Heterogeneous correlations in such networks:
  - between the label of an object o and the content of o;
  - between the label of o and the contents and labels of objects in the neighborhood of o;
  - between the label of o and the unobserved labels of objects in the neighborhood of o.

25 Analysis
Lemma: Consider two classes with expected visit probabilities f_1 and f_2 respectively, such that b = f_1 - f_2 > 0. Then the probability that the class visited most often during the sampled random walk process is reversed to class 2 is at most exp(-l*b^2/2), where l is the number of sampled walks.

Definition: Consider the node classification problem with a total of k classes. We define the sampling process to be b-accurate if none of the classes whose expected visit probability is at least b less than that of the class with the largest expected visit probability turns out to have the largest sampled visit probability.

Theorem: The probability that the sampling process results in a b-accurate reported majority class is at least 1 - (k-1)*exp(-l*b^2/2).

26 Experimental Results
[Figures] DYCOS vs. NetKit on DBLP: classification accuracy comparison and classification time comparison.

27 Experimental Results
[Figures] Additional parameter sensitivity plots: sensitivity to a, l and h; to m, l and h; to m, a and p_s; and to a, m and p_s.

