
1 INARC Report, Dan Roth, UIUC, March 2011
Local and Global Algorithms for Disambiguation to Wikipedia
Lev Ratinov & Dan Roth, Department of Computer Science, University of Illinois at Urbana-Champaign

2 INARC Activities I: Dan Roth, UIUC
I1.1: Fundamentals of Context-aware Real-time Data Fusion
- Advances in learning & inference for Constrained Conditional Models (CCMs): a computational framework for learning and inference with interdependent variables in constrained settings.
- Formulating information fusion as CCMs.
- Preliminary theoretical and experimental work on information fusion.
Key publications:
- R. Samdani and D. Roth, Efficient Learning for Constrained Structured Prediction, submitted.
- G. Kundu, D. Roth and R. Samdani, Constrained Classification Models for Information Fusion, submitted.
- M. Chang, M. Connor and D. Roth, The Necessity of Combining Adaptation Methods, EMNLP’10.
- M. Chang, V. Srikumar, D. Goldwasser and D. Roth, Structured Output Learning with Indirect Supervision, ICML’10.
- M. Chang, D. Goldwasser, D. Roth and V. Srikumar, Discriminative Learning over Constrained Latent Representations, NAACL’10.

3 INARC Activities II: Dan Roth, UIUC
I3.2: Modeling and Mining of Text-Rich Information Networks
- Large heterogeneous information networks of structured and unstructured data.
- State-of-the-art algorithmic tools for knowledge acquisition and information extraction, using the content & structure of the network.
- Make use of both the explicit network structure and the hidden ‘ontological’ structure (e.g., the category structure).
- Acquire and extract information from heterogeneous information networks when data is noisy, volatile, uncertain, and incomplete.
Key publications:
- L. Ratinov, D. Downey, M. Anderson and D. Roth, Local and Global Algorithms for Disambiguation to Wikipedia, ACL’11.
- Q. Do and D. Roth, Constraints based Taxonomic Relation Classification, EMNLP’10.
- Y. Chan and D. Roth, Exploiting Background Knowledge for Relation Extraction, COLING’10.
- Y. Chan and D. Roth, Exploiting Syntactico-Semantic Structures for Relation Extraction, ACL’11.
- J. Pasternack and D. Roth, Knowing What to Believe (when you already know something), COLING’10.
- J. Pasternack and D. Roth, Generalized Fact-Finding, WWW’10.
- J. Pasternack and D. Roth, Comprehensive Trust Metrics for Information Networks, Army Science Conference’10.

4 INARC Report, Dan Roth, UIUC, March 2011
Local and Global Algorithms for Disambiguation to Wikipedia
Lev Ratinov & Dan Roth, Department of Computer Science, University of Illinois at Urbana-Champaign

5 Information overload

6 Organizing knowledge
It’s a version of Chicago – the standard classic Macintosh menu font, with that distinctive thick diagonal in the “N”. Chicago was used by default for Mac menus through MacOS 7.6, and OS 8 was released mid-1997. Chicago VIII was one of the early 70s-era Chicago albums to catch my ear, along with Chicago II.

7 Cross-document co-reference resolution
It’s a version of Chicago – the standard classic Macintosh menu font, with that distinctive thick diagonal in the “N”. Chicago was used by default for Mac menus through MacOS 7.6, and OS 8 was released mid-1997. Chicago VIII was one of the early 70s-era Chicago albums to catch my ear, along with Chicago II.

8 Reference resolution (disambiguation to Wikipedia)
It’s a version of Chicago – the standard classic Macintosh menu font, with that distinctive thick diagonal in the “N”. Chicago was used by default for Mac menus through MacOS 7.6, and OS 8 was released mid-1997. Chicago VIII was one of the early 70s-era Chicago albums to catch my ear, along with Chicago II.

9 The “reference” collection has structure
It’s a version of Chicago – the standard classic Macintosh menu font, with that distinctive thick diagonal in the “N”. Chicago was used by default for Mac menus through MacOS 7.6, and OS 8 was released mid-1997. Chicago VIII was one of the early 70s-era Chicago albums to catch my ear, along with Chicago II.
(Diagram: relation edges Used_In, Is_a, Succeeded, Released.)

10 Analysis of Information Networks
It’s a version of Chicago – the standard classic Macintosh menu font, with that distinctive thick diagonal in the “N”. Chicago was used by default for Mac menus through MacOS 7.6, and OS 8 was released mid-1997. Chicago VIII was one of the early 70s-era Chicago albums to catch my ear, along with Chicago II.

11 Here, Wikipedia serves as the knowledge resource, but we can use other resources.
(Diagram: relation edges Used_In, Is_a, Succeeded, Released.)

12 Talk outline
- High-level algorithmic approach: bipartite graph matching with global and local inference.
- Local inference: experiments & results.
- Global inference: experiments & results.
- Results, conclusions.
- Demo.

13 Problem formulation: a matching/ranking problem
(Figure: bipartite matching between text documents (news, blogs, …) and Wikipedia articles.)

14 Local approach
- Γ is a solution to the problem: a set of pairs (m, t), where m is a mention in the document and t is the matched Wikipedia title.
(Figure: bipartite matching between text documents (news, blogs, …) and Wikipedia articles.)

15 Local approach
- Γ is a solution to the problem: a set of pairs (m, t), where m is a mention in the document and t is the matched Wikipedia title.
- Each pair receives a local score for matching the mention to the title.
(Figure: bipartite matching between text documents (news, blogs, …) and Wikipedia articles.)

16 Local + Global: using the Wikipedia structure
A “global” term evaluates how good the structure of the solution is.
(Figure: bipartite matching between text documents (news, blogs, …) and Wikipedia articles.)
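In the paper’s notation, the combined objective can be written roughly as below, where φ(m_i, t_i) is the local score of matching mention m_i to title t_i, ψ scores the pairwise coherence of the chosen titles, and N is the number of mentions in the document:

$$\Gamma^* = \arg\max_{\Gamma} \sum_{i=1}^{N} \phi(m_i, t_i) + \sum_{t_i, t_j \in \Gamma} \psi(t_i, t_j)$$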

17 Can be reduced to an NP-hard problem
(Figure: bipartite matching between text documents (news, blogs, …) and Wikipedia articles.)

18 A tractable variation
1. Invent a surrogate solution Γ’ by disambiguating each mention independently.
2. Evaluate the structure based on pairwise coherence scores Ψ(t_i, t_j), as in the sketch below.
(Figure: bipartite matching between text documents (news, blogs, …) and Wikipedia articles.)
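A minimal sketch of this two-stage approximation, assuming hypothetical `local_score` (the φ term) and `coherence` (the Ψ term) callables:

```python
def disambiguate(mentions, candidates, local_score, coherence):
    """Two-stage approximation: surrogate solution, then coherence re-ranking."""
    # Stage 1: surrogate solution Gamma' -- disambiguate each mention
    # independently using only the local score.
    surrogate = {m: max(candidates[m], key=lambda t: local_score(m, t))
                 for m in mentions}
    # Stage 2: re-rank each mention's candidates against the fixed surrogate,
    # adding pairwise coherence to the other mentions' surrogate titles.
    solution = {}
    for m in mentions:
        others = [t for m2, t in surrogate.items() if m2 != m]
        solution[m] = max(candidates[m],
                          key=lambda t: local_score(m, t)
                          + sum(coherence(t, t2) for t2 in others))
    return solution
```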

19 Talk outline
- High-level algorithmic approach: bipartite graph matching with global and local inference.
- Local inference: experiments & results.
- Global inference: experiments & results.
- Results, conclusions.
- Demo.

20 I. Baseline: P(Title | Surface Form)
Example: P(Title | “Chicago”)
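A sketch of how such a baseline is typically estimated from Wikipedia anchor statistics; the `anchors` input, a list of (surface form, linked title) pairs harvested from a dump, is an assumption of this sketch:

```python
from collections import Counter, defaultdict

def build_baseline(anchors):
    """Estimate P(title | surface form) from hyperlink anchor counts."""
    counts = defaultdict(Counter)
    for surface, title in anchors:
        counts[surface][title] += 1

    def p_title_given_surface(surface, title):
        total = sum(counts[surface].values())
        return counts[surface][title] / total if total else 0.0

    return p_title_given_surface

# With real Wikipedia counts, P(Chicago_city | "Chicago") would dwarf
# P(Chicago_font | "Chicago"), which is exactly the baseline's bias.
```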

21 II. Context(Title)
Example: Context(Charcoal) += “a font called __ is used to”

22 III. Text(Title)
Just the text of the page (one per title).

23 Putting it all together

Title          Baseline  Context  Text
Chicago_city   0.99      0.01     0.03
Chicago_font   0.0001    0.2      0.01
Chicago_band   0.001     0.001    0.02

City vs. Font: (0.99 vs. 0.0001, 0.01 vs. 0.2, 0.03 vs. 0.01)
Band vs. Font: (0.001 vs. 0.0001, 0.001 vs. 0.2, 0.02 vs. 0.01)
Training a ranking SVM:
- Consider all title pairs.
- Train a ranker on the pairs (learn to prefer the correct solution).
- Inference = knockout tournament (sketched below).
- Key: abstracts over the text – learns which scores are important.
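A sketch of knockout-tournament inference with a learned linear ranker; `features` maps each candidate title to its (baseline, context, text) score vector and `w` is the weight vector learned by the ranking SVM (names are illustrative):

```python
def knockout(candidates, features, w):
    """Pick a title by a knockout tournament over candidate pairs."""
    def prefer(a, b):
        # The pairwise ranker compares candidates by the sign of w . (f(a) - f(b)).
        return sum(wi * (x - y)
                   for wi, x, y in zip(w, features[a], features[b])) > 0
    winner = candidates[0]
    for challenger in candidates[1:]:
        if prefer(challenger, winner):
            winner = challenger
    return winner
```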

24 Example: font or city?
It’s a version of Chicago – the standard classic Macintosh menu font, with that distinctive thick diagonal in the “N”.
Text(Chicago_city), Context(Chicago_city)
Text(Chicago_font), Context(Chicago_font)

25 Lexical matching
It’s a version of Chicago – the standard classic Macintosh menu font, with that distinctive thick diagonal in the “N”.
Text(Chicago_city), Context(Chicago_city)
Text(Chicago_font), Context(Chicago_font)
Cosine similarity with TF-IDF weighting.
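A minimal sketch of this lexical score using scikit-learn, assuming `title_texts` maps each candidate title to its Text/Context representation:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def lexical_scores(mention_context, title_texts):
    """TF-IDF weighted cosine similarity between a mention's context
    and each candidate title's textual representation."""
    vectorizer = TfidfVectorizer()
    # Fit on the titles plus the query so they share one vocabulary space.
    matrix = vectorizer.fit_transform(list(title_texts.values()) + [mention_context])
    query, titles = matrix[-1], matrix[:-1]
    similarities = cosine_similarity(query, titles)[0]
    return dict(zip(title_texts.keys(), similarities))
```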

26 Ranking – font vs. city
It’s a version of Chicago – the standard classic Macintosh menu font, with that distinctive thick diagonal in the “N”.
Text(Chicago_city), Context(Chicago_city): scores (0.5, 0.2, 0.1, 0.8)
Text(Chicago_font), Context(Chicago_font): scores (0.3, 0.2, 0.3, 0.5)

27 Train a ranking SVM
It’s a version of Chicago – the standard classic Macintosh menu font, with that distinctive thick diagonal in the “N”.
Text(Chicago_city), Context(Chicago_city): (0.5, 0.2, 0.1, 0.8)
Text(Chicago_font), Context(Chicago_font): (0.3, 0.2, 0.3, 0.5)
Pairwise training example: [(0.2, 0, -0.2, 0.3), -1]
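A sketch of how such pairwise training examples can be built; for a gold title g and a competitor c, the ranker is trained on the score difference f(g) - f(c) and its negation, matching the difference vector on the slide:

```python
def ranking_pairs(gold, candidates, features):
    """Build (difference vector, label) pairs for a ranking SVM."""
    pairs = []
    for c in candidates:
        if c == gold:
            continue
        diff = [a - b for a, b in zip(features[gold], features[c])]
        pairs.append((diff, +1))                 # gold preferred over c
        pairs.append(([-d for d in diff], -1))   # and the mirrored example
    return pairs

# A linear SVM trained on these pairs yields the weight vector w used by
# the knockout tournament above.
```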

28 Scaling issues – one of our key contributions
It’s a version of Chicago – the standard classic Macintosh menu font, with that distinctive thick diagonal in the “N”.
Text(Chicago_city), Context(Chicago_city)
Text(Chicago_font), Context(Chicago_font)

29 Scaling issues
It’s a version of Chicago – the standard classic Macintosh menu font, with that distinctive thick diagonal in the “N”.
Text(Chicago_city), Context(Chicago_city)
Text(Chicago_font), Context(Chicago_font)
These Text and Context statistics are big, and must be loaded into memory from disk.

30 Improving performance
It’s a version of Chicago – the standard classic Macintosh menu font, with that distinctive thick diagonal in the “N”.
Text(Chicago_city), Context(Chicago_city)
Text(Chicago_font), Context(Chicago_font)
Rather than computing TF-IDF weighted cosine similarity, we want to train a classifier on the fly; because of the aggressive feature pruning, we choose PrTFIDF.
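A very rough sketch of a PrTFIDF-style probabilistic scorer, after Joachims (1997); the exact estimator used in the paper may differ, and `p_word_given_title` / `p_title` are assumed to be precomputed statistics:

```python
def prtfidf_score(doc_tokens, title, titles, p_word_given_title, p_title):
    """Score a title as the sum over words of P(w|d) * P(title|w)."""
    score = 0.0
    for w in set(doc_tokens):
        p_w_d = doc_tokens.count(w) / len(doc_tokens)   # word frequency in the document
        # Bayes-invert P(w|t) into P(t|w) over the candidate titles.
        denom = sum(p_word_given_title(w, t) * p_title(t) for t in titles)
        if denom > 0:
            score += p_w_d * p_word_given_title(w, title) * p_title(title) / denom
    return score
```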

31 Performance (local only): ranking accuracy on solvable mentions

Dataset          Baseline  +Local TF-IDF  +Local PrTFIDF
ACE              94.05     95.67          96.21
MSN News         81.91     84.04          85.10
AQUAINT          93.19     94.38          95.57
Wikipedia Test   85.88     92.76          93.59

32 Talk outline
- High-level algorithmic approach: bipartite graph matching with global and local inference.
- Local inference: experiments & results.
- Global inference: experiments & results.
- Results, conclusions.
- Demo.

33 Co-occurrence(Title_1, Title_2)
The city senses of Boston and Chicago appear together often.

34 Co-occurrence(Title_1, Title_2)
Rock music and albums appear together often.

35 Global ranking
How do we approximate the “global semantic context” of the document? (What is Γ’?)
- Use only non-ambiguous mentions for Γ’.
- Use the top baseline disambiguation for NER surface forms.
- Use the top baseline disambiguation for all the surface forms.
How do we define relatedness between two titles? (What is Ψ?)

36 Ψ: pairwise relatedness between two titles
- Normalized Google Distance (NGD)
- Pointwise Mutual Information (PMI)
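With L_1, L_2 the sets of Wikipedia articles linking to titles t_1, t_2 and W the set of all Wikipedia articles, these relatedness measures are commonly defined as follows (the paper uses variants over in-link sets):

$$\mathrm{NGD}(t_1, t_2) = \frac{\log\max(|L_1|, |L_2|) - \log|L_1 \cap L_2|}{\log|W| - \log\min(|L_1|, |L_2|)}$$

$$\mathrm{PMI}(t_1, t_2) = \frac{|L_1 \cap L_2|\,/\,|W|}{(|L_1|/|W|)\,(|L_2|/|W|)}$$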

37 Which Γ’ is best? (Ranker accuracy, solvable mentions)

Dataset          Baseline  +Global (Unambiguous)  +Global (NER)  +Global (All Mentions)
ACE              94.05     94.56                  96.21          96.75
MSN News         81.91     84.46                  84.04          88.51
AQUAINT          93.19     95.40                  94.04          95.91
Wikipedia Test   85.88     89.67                  89.59          89.79

38 Results – ranker accuracy (solvable mentions)

Dataset          Baseline  Baseline + Lexical  Baseline + Global (All Mentions)
ACE              94.05     96.21               96.75
MSN News         81.91     85.10               88.51
AQUAINT          93.19     95.57               95.91
Wikipedia Test   85.88     93.59               89.79

39 Results: Local + Global

Dataset          Baseline  Baseline + Lexical  Baseline + Lexical + Global
ACE              94.05     96.21               97.83
MSN News         81.91     85.10               87.02
AQUAINT          93.19     95.57               94.38
Wikipedia Test   85.88     93.59               94.18

40 Talk outline
- High-level algorithmic approach: bipartite graph matching with global and local inference.
- Local inference: experiments & results.
- Global inference: experiments & results.
- Results, conclusions.
- Demo.

41 Conclusions
Dealing with a very large-scale knowledge acquisition and extraction problem, with state-of-the-art algorithmic tools that exploit the content & structure of the network:
- Formulated a framework for local & global reference resolution and disambiguation into knowledge networks.
- Proposed local and global algorithms with state-of-the-art performance.
- Addressed scaling, a major issue.
- Identified key remaining challenges (next slide).

42 Future: we want to know what we don’t know
Not dealt with well in the literature:
- “As Peter Thompson, a 16-year-old hunter, said…”
- “Dorothy Byrne, a state coordinator for the Florida Green Party…”
We train a separate SVM classifier to identify such cases (see the sketch after this list). The features are:
- All the baseline, lexical and semantic scores of the top candidate.
- The score assigned to the top candidate by the ranker.
- The “confidence” of the ranker in the top candidate with respect to the second-best disambiguation.
- The Good-Turing probability of an out-of-Wikipedia occurrence for the mention.
Limited success so far; a topic for future research.
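A sketch of such a linker as a feature extractor over the ranked candidates; the helper names (`good_turing_oow_probability`, the candidate attributes) are hypothetical:

```python
def linker_features(mention, top, second, ranker):
    """Features for deciding whether the top candidate should be linked
    or the mention declared out-of-Wikipedia (OOW)."""
    top_score = ranker.score(mention, top.title)
    return [
        top.baseline_score, top.lexical_score, top.semantic_score,
        top_score,                                         # ranker score of top candidate
        top_score - ranker.score(mention, second.title),   # margin over second-best
        good_turing_oow_probability(mention.surface),      # P(mention names an unseen title)
    ]

# A binary SVM over these features keeps the link only when it predicts +1.
```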

43 Comparison to the previous state of the art (all mentions, including out-of-Wikipedia)

Dataset          Baseline  Milne & Witten  Our system (GLOW)
ACE              69.52     72.76           77.25
MSN News         72.83     68.49           74.88
AQUAINT          82.64     83.61           83.94
Wikipedia Test   81.77     80.32           90.54

44 Demo

