1 Predictively Modeling Social Text William W. Cohen Machine Learning Dept. and Language Technologies Institute School of Computer Science Carnegie Mellon University Joint work with: Amr Ahmed, Andrew Arnold, Ramnath Balasubramanyan, Frank Lin, Matt Hurst (MSFT), Ramesh Nallapati, Noah Smith, Eric Xing, Tae Yano

2 Newswire Text vs. Social Media Text
Newswire text is formal. Primary purpose:
–Inform the "typical reader" about recent events
Broad audience:
–Explicitly establish shared context with the reader
–Ambiguity often avoided
Social media text is informal. Many purposes:
–Entertain, connect, persuade…
Narrow audience:
–Friends and colleagues
–Shared context already established
–Many statements are ambiguous out of social context

3 Newswire Text vs. Social Media Text
Newswire text, goals of analysis:
–Extract information about events from text
–"Understanding" text requires understanding the "typical reader": conventions for communicating with him/her, prior knowledge, background, …
Social media text, goals of analysis:
–Very diverse
–Evaluation is difficult, and requires revisiting often as goals evolve
–Often "understanding" social text requires understanding a community

4 Outline
Tools for analysis of text
–Probabilistic models for text, communities, and time
  Mixture models and LDA models for text
  LDA extensions to model hyperlink structure
  LDA extensions to model time
–Alternative framework based on graph analysis to model time & community
  Preliminary results & tradeoffs
Discussion of results & challenges

5 Introduction to Topic Models: Multinomial Naïve Bayes
[Plate diagram: a class node C with word nodes W1, W2, W3, …, WN inside a box repeated M times; the class-prior and per-class word-distribution parameters sit outside the box. Example instantiation: C = 'football', words 'The', 'Pittsburgh', 'Steelers', …, 'won'.]
The box is shorthand for many repetitions of the structure inside it.

6 Introduction to Topic Models: Multinomial Naïve Bayes
[Same plate diagram, with a second example instantiation: C = 'politics', words 'The', 'Pittsburgh', 'mayor', …, 'stated'.]

7 Introduction to Topic Models: Naïve Bayes Model, Compact Representation
[The diagram with explicit nodes W1, W2, W3, …, WN is collapsed into a single word node W inside a plate of size N, nested inside the document plate of size M.]

8 Introduction to Topic Models: Multinomial Naïve Bayes
Generative story:
For each document d = 1, …, M
  Generate C_d ~ Mult(· | π)
  For each position n = 1, …, N_d
    Generate w_n ~ Mult(· | β, C_d)
Example trace for document d = 1:
  Generate C_d ~ Mult(· | π) = 'football'
  For each position n = 1, …, N_d = 67
    Generate w_1 ~ Mult(· | β, C_d) = 'the'
    Generate w_2 = 'Pittsburgh'
    Generate w_3 = 'Steelers'
    …
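A minimal sketch of this generative story in Python; the class names, vocabulary, and probability tables below are invented for illustration, with pi playing the role of the class prior and beta the per-class word distributions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: pi = class prior, beta = per-class word distributions.
classes = ["football", "politics"]
vocab = ["the", "pittsburgh", "steelers", "mayor", "won", "stated"]
pi = np.array([0.5, 0.5])                      # P(C)
beta = np.array([                              # P(w | C), one row per class
    [0.30, 0.20, 0.25, 0.00, 0.25, 0.00],      # football
    [0.30, 0.20, 0.00, 0.25, 0.00, 0.25],      # politics
])

def generate_document(n_words):
    """Sample C_d ~ Mult(pi), then each w_n ~ Mult(beta[C_d])."""
    c = rng.choice(len(classes), p=pi)
    words = rng.choice(vocab, size=n_words, p=beta[c])
    return classes[c], list(words)

print(generate_document(5))
```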

9 Introduction to Topic Models: Multinomial Naïve Bayes
In these graphs: shaded circles are known values, and the parents of a variable W are the inputs to the function used in generating W.
Goal: given the known values, estimate the rest, usually by maximizing the probability of the observations:
  ∏_d P(C_d | π) ∏_n P(w_n | β, C_d)

10 Introduction to Topic Models: Mixture Model
Mixture model: an unsupervised naïve Bayes model, in which the class variable Z is hidden.
Joint probability of words and classes:
  P(w_1, …, w_N, Z = z) = P(z | π) ∏_n P(w_n | β, z)
But classes are not visible, so they must be summed out:
  P(w_1, …, w_N) = Σ_z P(z | π) ∏_n P(w_n | β, z)

11 Introduction to Topic Models: Learning
Learning for naïve Bayes:
–Take logs; the resulting objective decomposes and is easy to optimize in closed form for any parameter
Learning for the mixture model:
–Many local maxima (at least one for each permutation of the classes)
–Expectation-maximization (EM) is the most common method

12 Introduction to Topic Models: Mixture Model, EM Solution
E-step: for each document d, q(Z_d = k) ∝ π_k ∏_n β_{k, w_n}
M-step: π_k ∝ Σ_d q(Z_d = k);  β_{k, w} ∝ Σ_d q(Z_d = k) · count(w in d)

13 Introduction to Topic Models: Mixture Model, EM Solution
E-step: q(Z_d = k) ∝ π_k ∏_n β_{k, w_n}
  (estimate the expected values of the unknown variables: a "soft classification" of each document)
M-step: π_k ∝ Σ_d q(Z_d = k);  β_{k, w} ∝ Σ_d q(Z_d = k) · count(w in d)
  (maximize the parameter values subject to this guess; usually this is learning the parameter values given the "soft classifications")
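As a concrete sketch, here is one way to write these E and M steps for a mixture of multinomials over a document-term count matrix; the function name, random initialization, and smoothing constant eps are choices made for illustration, not taken from the slides:

```python
import numpy as np

def mixture_em(X, K, n_iter=50, seed=0, eps=1e-12):
    """EM for a mixture of multinomials.
    X: (D, V) matrix of word counts; K: number of latent classes."""
    rng = np.random.default_rng(seed)
    D, V = X.shape
    pi = np.full(K, 1.0 / K)                       # class prior
    beta = rng.dirichlet(np.ones(V), size=K)       # per-class word distributions

    for _ in range(n_iter):
        # E-step: q[d, k] proportional to pi_k * prod_w beta[k, w]^X[d, w] (log space)
        log_q = np.log(pi + eps) + X @ np.log(beta + eps).T
        log_q -= log_q.max(axis=1, keepdims=True)
        q = np.exp(log_q)
        q /= q.sum(axis=1, keepdims=True)

        # M-step: re-estimate pi and beta from the soft assignments
        pi = q.sum(axis=0) / D
        beta = q.T @ X + eps
        beta /= beta.sum(axis=1, keepdims=True)
    return pi, beta, q
```

Because of the local maxima mentioned on the previous slide, in practice this is usually run from several random restarts and the best-scoring solution is kept.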

14 Introduction to Topic Models

15 Probabilistic Latent Semantic Analysis (PLSA) Model
Generative story:
  Select document d ~ Mult(·)
  For each position n = 1, …, N_d
    generate z_n ~ Mult(· | θ_d)
    generate w_n ~ Mult(· | β_{z_n})
θ_d is the per-document topic distribution.
Mixture model: each document is generated by a single (unknown) multinomial distribution over words; the corpus is "mixed" by π.
PLSA model: each word is generated by a single unknown multinomial distribution over words; each document is mixed by θ_d.

16 Introduction to Topic Models
["Latent Dirichlet Allocation", Blei, Ng & Jordan, JMLR, 2003.]

17 Introduction to Topic Models: Latent Dirichlet Allocation
Generative story:
For each document d = 1, …, M
  Generate θ_d ~ Dir(· | α)
  For each position n = 1, …, N_d
    generate z_n ~ Mult(· | θ_d)
    generate w_n ~ Mult(· | β_{z_n})
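A small Python sketch of this generative process, sampling word ids rather than actual words; the toy alpha and beta values below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_corpus(M, N, alpha, beta):
    """Sample M documents of length N from the LDA generative process.
    alpha: (K,) Dirichlet prior; beta: (K, V) topic-word distributions."""
    K, V = beta.shape
    docs = []
    for _ in range(M):
        theta = rng.dirichlet(alpha)                 # theta_d ~ Dir(alpha)
        z = rng.choice(K, size=N, p=theta)           # z_n ~ Mult(theta_d)
        w = [rng.choice(V, p=beta[zn]) for zn in z]  # w_n ~ Mult(beta_{z_n})
        docs.append(w)
    return docs

# Example: 3 documents, 10 words each, 2 topics over a 5-word vocabulary.
beta = np.array([[0.5, 0.3, 0.1, 0.05, 0.05],
                 [0.05, 0.05, 0.1, 0.3, 0.5]])
print(generate_corpus(3, 10, alpha=np.ones(2), beta=beta))
```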

18 Introduction to Topic Models LDA’s view of a document

19 Introduction to Topic Models LDA topics

20 Introduction to Topic Models: Latent Dirichlet Allocation
–Overcomes some technical issues with PLSA: PLSA only estimates mixing parameters for the training docs
–Parameter learning is more complicated:
  Gibbs sampling: easy to program, often slow
  Variational EM
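For concreteness, a minimal sketch of collapsed Gibbs sampling for LDA with symmetric priors; the hyperparameter values and the update below are the standard textbook form, not necessarily the exact samplers used in the work discussed here:

```python
import numpy as np

def gibbs_lda(docs, K, V, alpha=0.1, eta=0.01, n_iter=200, seed=0):
    """Collapsed Gibbs sampling for LDA. docs: list of lists of word ids."""
    rng = np.random.default_rng(seed)
    ndk = np.zeros((len(docs), K))          # doc-topic counts
    nkw = np.zeros((K, V))                  # topic-word counts
    nk = np.zeros(K)                        # topic totals
    z = [rng.integers(K, size=len(d)) for d in docs]
    for d, doc in enumerate(docs):          # initialize counts
        for n, w in enumerate(doc):
            ndk[d, z[d][n]] += 1; nkw[z[d][n], w] += 1; nk[z[d][n]] += 1
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for n, w in enumerate(doc):
                k = z[d][n]                 # remove the current assignment
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # P(z=k | rest) proportional to (ndk + alpha)*(nkw + eta)/(nk + V*eta)
                p = (ndk[d] + alpha) * (nkw[:, w] + eta) / (nk + V * eta)
                k = rng.choice(K, p=p / p.sum())
                z[d][n] = k                 # add back with the new topic
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    return ndk, nkw
```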

21 Introduction to Topic Models
[Perplexity comparison of the unigram, mixture, PLSA, and LDA models; lower is better.]

22 Introduction to Topic Models
[Prediction accuracy for classification using topic models as features; higher is better.]

23 Outline
Tools for analysis of text
–Probabilistic models for text, communities, and time
  Mixture models and LDA models for text
  LDA extensions to model hyperlink structure
  LDA extensions to model time
–Alternative framework based on graph analysis to model time & community
  Preliminary results & tradeoffs
Discussion of results & challenges

24 Hyperlink modeling using LDA

25 Hyperlink modeling using LinkLDA [Erosheva, Fienberg, Lafferty, PNAS, 2004]
Generative story:
For each document d = 1, …, M
  Generate θ_d ~ Dir(· | α)
  For each position n = 1, …, N_d
    generate z_n ~ Mult(· | θ_d)
    generate w_n ~ Mult(· | β_{z_n})
  For each citation j = 1, …, L_d
    generate z_j ~ Mult(· | θ_d)
    generate c_j ~ Mult(· | γ_{z_j}), where γ_k is a topic-specific distribution over cited documents
Learning uses variational EM.
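A sketch of how this generative story extends the LDA sampler above: words and citations in a document share the same θ_d, but citations are drawn from topic-specific citation distributions (called gamma here; the names and shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_linklda_doc(N, L, alpha, beta, gamma):
    """One document under LinkLDA: N words and L citations share theta_d.
    beta: (K, V) topic-word dists; gamma: (K, C) topic-citation dists."""
    K = beta.shape[0]
    theta = rng.dirichlet(alpha)                     # theta_d ~ Dir(alpha)
    words = [rng.choice(beta.shape[1], p=beta[rng.choice(K, p=theta)])
             for _ in range(N)]                      # z_n, then w_n
    cites = [rng.choice(gamma.shape[1], p=gamma[rng.choice(K, p=theta)])
             for _ in range(L)]                      # z_j, then c_j
    return words, cites
```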

26 Hyperlink modeling using LDA [Erosheva, Fienberg, Lafferty, PNAS, 2004]

27 Newswire Text vs. Social Media Text
Newswire text, goals of analysis:
–Extract information about events from text
–"Understanding" text requires understanding the "typical reader": conventions for communicating with him/her, prior knowledge, background, …
Social media text, goals of analysis:
–Very diverse
–Evaluation is difficult, and requires revisiting often as goals evolve
–Often "understanding" social text requires understanding a community
Science as a testbed for social text: an open community which we understand.

28 Modeling Citation Influences [Dietz, Bickel, Scheffer, ICML 2007]
Copycat model of citation influence:
–LDA model for cited papers
–Extended LDA model for citing papers
–For each word, depending on a coin flip c, you might choose to copy a word from a cited paper instead of generating the word

29 Modeling Citation Influences [Dietz, Bickel, Scheffer, ICML 2007] Citation influence graph for LDA paper

30 Models of hypertext for blogs [ICWSM 2008] Ramesh Nallapati, William W. Cohen, Amr Ahmed, Eric Xing

31 Link-PLSA-LDA
–LinkLDA model for citing documents
–Variant of the PLSA model for cited documents
–Topics are shared between citing and cited documents
–Links depend on the topics in the two documents

32 Experiments
–8.4M blog postings in the Nielsen/BuzzMetrics corpus, collected over three weeks in summer 2005
–Selected all postings with >= 2 inlinks or >= 2 outlinks: 2248 citing documents (2+ outlinks) and 1777 cited documents (2+ inlinks); only 68 documents were in both sets, and these were duplicated
–Fit the model using variational EM

33 Topics in blogs
The model can answer questions like: which blogs are most likely to be cited when discussing topic z?

34 Topics in blogs
The model can be evaluated by predicting which links an author will include in an article (lower is better).
[Plot comparing Link-PLSA-LDA and Link-LDA.]

35 Another model: Pairwise Link-LDA
–LDA for both cited and citing documents
–Generate an indicator for every pair of docs, vs. generating pairs of docs
–A link depends on the mixing components (the θ's), as in a stochastic block model
[Plate diagram for the paired documents and the link indicator.]

36 Pairwise Link-LDA supports new inferences… …but doesn’t perform better on link prediction

37 Outline
Tools for analysis of text
–Probabilistic models for text, communities, and time
  Mixture models and LDA models for text
  LDA extensions to model hyperlink structure
    Observation: these models can be used for many purposes…
  LDA extensions to model time
–Alternative framework based on graph analysis to model time & community
Discussion of results & challenges

38 Predicting Response to Political Blog Posts with Topic Models [NAACL ’09] Tae Yano Noah Smith

39 Political blogs and comments
Posts are often coupled with comment sections. Comment style is casual, creative, and less carefully edited.

40 Political blogs and comments
–Most of the text associated with large "A-list" community blogs is comments: 5-20x as many words in comments as in post text for the 5 sites considered in Yano et al.
–A large part of socially-created commentary in the blogosphere is comments, not blog-to-blog hyperlinks
–Comments do not just echo the post

41 Modeling political blogs
Our political blog model: CommentLDA
Notation: D = # of documents; N = # of words in the post; M = # of words in the comments; z, z' = topic; w = word (in post); w' = word (in comments); u = user

42 Modeling political blogs
Our proposed political blog model: CommentLDA. The left-hand side of the model is vanilla LDA.
(D = # of documents; N = # of words in the post; M = # of words in the comments)

43 Modeling political blogs
Our proposed political blog model: CommentLDA. The right-hand side captures the generation of the reaction separately from the post body, using two separate sets of word distributions; the two chambers share the same topic mixture.
(D = # of documents; N = # of words in the post; M = # of words in the comments)

44 Modeling political blogs
Our proposed political blog model: CommentLDA. The user IDs of the commenters are treated as part of the comment text and are generated along with the words in the comment section.
(D = # of documents; N = # of words in the post; M = # of words in the comments)
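A hedged sketch of this generative story: post words and comment (user, word) pairs share one topic mixture per post, with separate word distributions for the two chambers and a topic-specific distribution over commenters. The parameter names (beta, beta_c, psi) are placeholders, not the paper's notation:

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_commentlda_doc(N, M, alpha, beta, beta_c, psi):
    """One blog post: N post words and M comment words, each comment word
    paired with a commenter id; both sides share the topic mixture theta_d.
    beta: (K, V) post-word dists; beta_c: (K, V) comment-word dists;
    psi: (K, U) topic-user dists."""
    K = beta.shape[0]
    theta = rng.dirichlet(alpha)
    post = [rng.choice(beta.shape[1], p=beta[rng.choice(K, p=theta)])
            for _ in range(N)]
    comments = []
    for _ in range(M):
        z = rng.choice(K, p=theta)
        w = rng.choice(beta_c.shape[1], p=beta_c[z])   # comment word
        u = rng.choice(psi.shape[1], p=psi[z])         # commenter id
        comments.append((u, w))
    return post, comments
```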

45 Modeling political blogs
Another model we tried: take CommentLDA and remove the words from the comment section, giving a model that is agnostic to the words in the comments. This model is structurally equivalent to LinkLDA (Erosheva et al., 2004).
(D = # of documents; N = # of words in the post; M = # of words in the comments)

46 Topic discovery - Matthew Yglesias (MY) site

47 Topic discovery - Matthew Yglesias (MY) site

48 Topic discovery - Matthew Yglesias (MY) site

49 Comment prediction and user prediction
–LinkLDA and CommentLDA consistently outperform the baseline models; neither consistently outperforms the other.
–Comment prediction (as listed on the slide): 20.54% CommentLDA (R), 16.92% LinkLDA (R), 32.06% LinkLDA (C).
–User prediction: precision at top 10. [Bar charts, from left to right: LinkLDA (-v, -r, -c), CommentLDA (-v, -r, -c), and baselines (Freq, NB), for the CB, MY, and RS sites.]

50 From Episodes to Sagas: Temporally Clustering News Via Social-Media Commentary [current work] Noah Smith, Matthew Hurst, Frank Lin, Ramnath Balasubramanyan

51 Motivation
–The news-related blogosphere is driven by recency
–Some recent news is better understood in the context of a sequence of related stories
–Some readers have this context; some don't
–To reconstruct the context, reconstruct the sequence of related stories (the "saga"); similar to retrospective event detection
–First efforts: find related stories, cluster by time
–Evaluation: agreement with human annotators

52 Clustering results on Democratic-primary-related documents
–SpeCluster + time: a mixture of multinomials, plus a model for "general" text, plus a timestamp drawn from a Gaussian
–k-walks (more later)

53 Clustering results on Democratic-primary-related documents
–Also had three human annotators build gold-standard timelines: hierarchical, annotated with names of events, times, …
–A machine-produced timeline can be evaluated by its tree-distance to a gold-standard one

54 Clustering results on Democratic-primary-related documents
Issue: divergence of opinion with the human annotators.
–Is modeling community interests the problem?
–How much of what we want is actually in the data?
–Should this task be supervised or unsupervised?

55 More sophisticated time models
Hierarchical LDA Over Time (HOTS) model:
–LDA to generate text
–Also generates a timestamp for each document from topic-specific Gaussians
–A "non-parametric" model: the number of clusters is also generated, not specified by the user
–Allows use of user-provided "prototypes"
–Evaluated on liberal/conservative blogs and ML papers from NIPS conferences
Ramnath Balasubramanyan
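A sketch of just the timestamp piece of such a model, assuming each document's date is drawn from a Gaussian tied to one of its topics; the nonparametric machinery for choosing the number of clusters is omitted, and all names below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_doc_with_timestamp(N, alpha, beta, mu, sigma):
    """LDA-style words plus a timestamp drawn from the Gaussian of one
    of the document's topics. mu, sigma: (K,) per-topic time parameters."""
    K, V = beta.shape
    theta = rng.dirichlet(alpha)
    z = rng.choice(K, size=N, p=theta)
    words = [rng.choice(V, p=beta[k]) for k in z]
    k_time = rng.choice(K, p=theta)            # topic responsible for the date
    timestamp = rng.normal(mu[k_time], sigma[k_time])
    return words, timestamp
```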

56 Results with HOTS model - unsupervised

57 Results with HOTS model – human guidance Adding human seeds for some key events improves performance on all events. Allows a user to partially specify a timeline of events and have the system complete it.

58 Outline
Tools for analysis of text
–Probabilistic models for text, communities, and time
  Mixture models and LDA models for text
  LDA extensions to model hyperlink structure
  LDA extensions to model time
–Alternative framework based on graph analysis to model time & community
  Preliminary results & tradeoffs
Discussion of results & challenges

59 Spectral Clustering: Graph = Matrix; Vector = Node → Weight
[Figure: a 10-node example graph (nodes A-J) shown alongside its adjacency matrix M, and a vector v assigning a weight to each node.]

60 Spectral Clustering: Graph = Matrix
M*v_1 = v_2 "propagates weights from neighbors"
[Figure: the adjacency matrix M times a node-weight vector v_1; each entry of v_2 is the sum of the weights of that node's neighbors, e.g. node A gets 2*1 + 3*1 + 0*1.]

61 Spectral Clustering: Graph = Matrix
M*v_1 = v_2 "propagates weights from neighbors"
[Figure: the same product, now with the resulting weights filled in: A=5, B=6, C=5, …]
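A tiny numeric illustration of this "propagate weights from neighbors" step, using a made-up 4-node adjacency matrix rather than the one in the figure:

```python
import numpy as np

# Toy unweighted adjacency matrix for 4 nodes and an initial weight vector.
M = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
v1 = np.array([3.0, 2.0, 3.0, 1.0])

# Each entry of v2 is the sum of that node's neighbors' current weights.
v2 = M @ v1
print(v2)   # [5. 6. 6. 3.]
```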

62 Spectral Clustering: Graph = Matrix
W*v_1 = v_2 "propagates weights from neighbors" [Shi & Meila, 2002]
[Figure: scatter plots of eigenvector coordinates (e_2, e_3); points from the three subcommunities labeled x, y, and z separate into distinct clusters.]

63 Spectral Clustering
If W is the row-normalized adjacency matrix for a connected graph with k closely-connected subcommunities, then:
–the "top" eigenvector is a constant vector
–the next k eigenvectors are roughly piecewise constant, with "pieces" corresponding to the subcommunities
Spectral clustering:
–Find the top k+1 eigenvectors v_1, …, v_{k+1}
–Discard the "top" one
–Replace every node a with the k-dimensional vector x_a = ⟨v_2(a), …, v_{k+1}(a)⟩
–Cluster with k-means
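A compact sketch of this recipe using numpy and scikit-learn; taking a dense eigendecomposition of the row-normalized matrix and sorting by eigenvalue is one straightforward (not the most scalable) way to get the top eigenvectors:

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_cluster(A, k):
    """Cluster nodes of an adjacency matrix A (numpy array) into k groups,
    using the row-normalized matrix W = D^{-1} A as on the slide."""
    W = A / A.sum(axis=1, keepdims=True)
    eigvals, eigvecs = np.linalg.eig(W)
    order = np.argsort(-eigvals.real)            # largest eigenvalues first
    X = eigvecs[:, order[1:k + 1]].real          # drop the constant top eigenvector
    return KMeans(n_clusters=k, n_init=10).fit_predict(X)
```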

64 Spectral Clustering: Pros and Cons
–Elegant and well-founded mathematically
–Works quite well when relations are approximately transitive (like similarity or social connections)
–Expensive for very large datasets: computing eigenvectors is the bottleneck
–Noisy datasets cause problems: "informative" eigenvectors need not be in the top few, and performance can drop suddenly from good to terrible

65 Experimental results: best-case assignment of class labels to clusters Adamic & Glance “Divided They Blog:…” 2004

66 Spectral Clustering: Graph = Matrix
M*v_1 = v_2 "propagates weights from neighbors"
[Figure repeated from earlier: the matrix-vector product with the resulting weights A=5, B=6, C=5, …]

67 Repeated averaging with neighbors as a clustering method
–Pick a vector v_0 (maybe even at random)
–Compute v_1 = W v_0, i.e., replace v_0[x] with the weighted average of v_0[y] for the neighbors y of x
–Plot v_1[x] for each x
–Repeat for v_2, v_3, …
–What are the dynamics of this process?
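The loop written out, assuming W is the row-normalized adjacency matrix so that each step replaces a node's value with the average of its neighbors' values; the function and variable names are illustrative:

```python
import numpy as np

def repeated_averaging(A, n_steps=10, seed=0):
    """Track v_t = W v_{t-1}, where W is the row-normalized adjacency matrix."""
    rng = np.random.default_rng(seed)
    W = A / A.sum(axis=1, keepdims=True)
    v = rng.random(A.shape[0])              # v_0 picked at random
    history = [v]
    for _ in range(n_steps):
        v = W @ v                           # each node averages its neighbors
        history.append(v)
    return history
```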

68 Repeated averaging with neighbors on a sample problem…
[Figure: plots of the node values, from small to larger numbers of averaging steps.]

69 PIC: Power Iteration Clustering
Run power iteration (repeated averaging with neighbors) with early stopping.
–Formally, can show this works when spectral techniques work
–Experimentally, linear time
–Easy to implement and efficient
–Very easily parallelized
–Experimentally, often better than traditional spectral methods
Frank Lin
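A rough sketch of the PIC idea: power iteration with per-step normalization, stopped early once the per-node change stabilizes, followed by k-means on the resulting one-dimensional embedding. The stopping rule and threshold here are illustrative, not the exact algorithm from the paper:

```python
import numpy as np
from sklearn.cluster import KMeans

def power_iteration_clustering(A, k, max_iter=1000, tol=1e-5, seed=0):
    """PIC-style clustering: iterate v <- W v (normalized each step), stop
    early once the embedding has locally converged, then cluster with k-means."""
    rng = np.random.default_rng(seed)
    W = A / A.sum(axis=1, keepdims=True)
    v = rng.random(A.shape[0]); v /= np.abs(v).sum()
    delta_old = np.full_like(v, np.inf)
    for _ in range(max_iter):
        v_new = W @ v
        v_new /= np.abs(v_new).sum()
        delta = np.abs(v_new - v)
        if np.abs(delta - delta_old).max() < tol:   # change has stabilized: stop
            v = v_new
            break
        v, delta_old = v_new, delta
    return KMeans(n_clusters=k, n_init=10).fit_predict(v.reshape(-1, 1))
```

Stopping early is what keeps the embedding from collapsing to the constant top eigenvector, which is why the method can run in roughly linear time while still separating the subcommunities.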

70 Experimental results: best-case assignment of class labels to clusters

71 Experiments: run time and scalability (time in milliseconds)

72 Clustering results on Democratic-primary-related documents
k-walks vs. human annotators (k-walks is an early version of PIC)
–Cluster a graph with several types of nodes: blog entries, news stories, and dates
–Clusters ("communities") of the graph correspond to events in the "saga"
–Advantage: PIC clusters at interactive speeds

73 Outline
Tools for analysis of text
–Probabilistic models for text, communities, and time
  Mixture models and LDA models for text
  LDA extensions to model hyperlink structure
  LDA extensions to model time
–Alternative framework based on graph analysis to model time & community
  Preliminary results & tradeoffs
Discussion of results & challenges

74 Social Media Text: Summary
Goals of analysis:
–Very diverse
–Evaluation is difficult, and requires revisiting often as goals evolve
–Often "understanding" social text requires understanding a community
Probabilistic models can model many aspects of social text:
–Community (links, comments)
–Time
Evaluation:
–Introspective, qualitative evaluation on communities we understand (e.g., scientific communities)
–Quantitative evaluation on predictive tasks: link prediction, user prediction, comments, …
–Against "gold-standard" visualizations (sagas)

75 Thanks to… NIH/NIGMS, NSF, Microsoft LiveLabs, Microsoft Research, Johnson & Johnson, and the Language Technologies Institute, CMU.

