
1 Graph-Based Methods for “Open Domain” Information Extraction William W. Cohen Machine Learning Dept. and Language Technologies Institute School of Computer Science Carnegie Mellon University

2 Traditional IE vs Open Domain IE
Traditional IE: Goal: recognize people, places, companies, times, dates, … in NL text. Supervised learning from a corpus completely annotated with the target entity class (e.g. “people”). Linear-chain CRFs. Language- and genre-specific extractors.
Open Domain IE: Goal: recognize arbitrary entity sets in text, with minimal info about the entity class. Example 1: “ICML, NIPS”. Example 2: “machine learning conferences”. Semi-supervised learning from very large corpora (WWW). Graph-based learning methods. Techniques are largely language-independent (!): the graph abstraction fits many languages.

3 Examples with three seeds

4 Outline
History
– Open-domain IE by pattern-matching
– The bootstrapping-with-noise problem
– Bootstrapping as a graph walk
Open-domain IE as finding nodes “near” seeds on a graph
– Approach 1: A “natural” graph derived from a smaller corpus + learned similarity
– Approach 2: A carefully-engineered graph derived from a huge corpus (examples above)

5 History: Open-domain IE by pattern-matching (Hearst, 92)
Start with seeds: “NIPS”, “ICML”
Look through a corpus for certain patterns: “…at NIPS, AISTATS, KDD and other learning conferences…”
Expand from seeds to new instances, e.g. “on PC of KDD, SIGIR, … and…”
Repeat… until ___
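A minimal sketch of this bootstrapping loop, assuming a single toy coordination pattern (the regex, corpus, and fixed round count are illustrative; real systems use much richer patterns and a principled stopping rule):

```python
import re

def bootstrap(corpus, seeds, rounds=3):
    """Hearst-style bootstrapping sketch: grow the seed set by matching
    a coordination pattern around known instances."""
    entities = set(seeds)
    for _ in range(rounds):
        found = set()
        for e in entities:
            # Toy pattern: a known entity directly followed by ", NewName"
            pat = re.compile(re.escape(e) + r",\s+([A-Z][A-Za-z-]+)")
            for doc in corpus:
                found.update(pat.findall(doc))
        entities |= found  # expand from seeds to new instances
    return entities

corpus = ["at NIPS, AISTATS, KDD and other learning conferences",
          "on PC of KDD, SIGIR, and more"]
print(bootstrap(corpus, {"NIPS", "ICML"}))
# round 1 finds AISTATS; round 2 finds KDD; round 3 finds SIGIR
```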

6 Bootstrapping as graph proximity
[Graph linking patterns to the entities they contain: “…at NIPS, AISTATS, KDD and other learning conferences…”, “on PC of KDD, SIGIR, … and…”, “For skiers, NIPS, SNOWBIRD, … and…”, “…AISTATS, KDD, …”; entity nodes: NIPS, AISTATS, KDD, SNOWBIRD, SIGIR]
shorter paths ~ earlier iterations; many paths ~ additional evidence

7 Outline
Open-domain IE as finding nodes “near” seeds on a graph
– Approach 1: A “natural” graph derived from a smaller corpus + learned similarity (with Einat Minkov, CMU → Nokia)
– Approach 2: A carefully-engineered graph derived from a huge corpus (above) (with Richard Wang, CMU → ?)

8 Learning Similarity Measures for Parsed Text (Minkov & Cohen, EMNLP 2008)
[Figure: dependency parse of “boys like playing with cars of all kinds”; edges: nsubj, partmod, prep.with, prep.of, det; POS tags: NN, VB, DT, NN]
A dependency-parsed sentence is naturally represented as a tree

9 Learning Similarity Measures for Parsed Text (Minkov & Cohen, EMNLP 2008) A dependency-parsed corpus is “naturally” represented as a graph

10 Learning Similarity Measures for Parsed Text (Minkov & Cohen, EMNLP 2008) Open IE Goal: Find “coordinate terms” (e.g., girl/boy, dolls/cars) in the graph; that is, find a similarity measure S so that S(girl, boy) is high. What about off-the-shelf similarity measures: Random Walk with Restart (RWR), hitting time, commute time, …?

11 Personalized PR/RWR
Graph walk parameters: edge weights Θ, walk length K, and reset probability γ.
M[x,y] = prob. of reaching y from x in one step: the edge weight from x to y, out of the total outgoing weight from x.
‘Personalized PageRank’: reset probability biased towards the initial distribution.
The graph: nodes (each with a node type), edges (each with an edge label and an edge weight).
A query language: Q = {seed distribution Vq, target node type}; returns a list of nodes (of the target type) ranked by the graph walk probabilities.
Approximate with power iteration, cut off after a fixed number of iterations K.
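A minimal sketch of that power-iteration approximation (the function name, toy matrix, and seed vector are mine; it assumes M is a row-stochastic transition matrix already built from the edge weights Θ):

```python
import numpy as np

def personalized_pagerank(M, v0, gamma=0.15, K=10):
    """Approximate Personalized PageRank by power iteration, cut off
    after K steps. M[x, y] = one-step probability from x to y;
    v0 is the seed (reset) distribution; gamma is the reset probability."""
    v = v0.copy()
    for _ in range(K):
        v = gamma * v0 + (1.0 - gamma) * (M.T @ v)
    return v  # rank candidate nodes by these scores

# Tiny 3-node example: node 0 is the seed.
M = np.array([[0.0, 0.5, 0.5],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
v0 = np.array([1.0, 0.0, 0.0])
print(personalized_pagerank(M, v0))
```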

12 A walk from “girls” to “boys”:
girls →[mention] girls₁ →[nsubj] like₁ →[mention⁻¹] like →[mention] like₂ →[nsubj⁻¹] boys₂ →[mention⁻¹] boys

13 Multiple walks from “girls” to “boys”:
girls →[mention] girls₁ →[nsubj] like₁ →[mention⁻¹] like →[mention] like₂ →[nsubj⁻¹] boys₂ →[mention⁻¹] boys
girls →[mention] girls₁ →[nsubj] like₁ →[partmod] playing₁ →[mention⁻¹] playing →[mention] … →[mention⁻¹] boys

14 A walk from “girls” to “dolls”:
girls →[mention] girls₁ →[nsubj] like₁ →[partmod] playing₁ →[prep.with] dolls₁ →[mention⁻¹] dolls
Useful, but not our goal here…

15 Learning a better similarity metric
Task T (query class): seed words (“girl”, “boy”, …) are issued as queries a, b, …, q. A graph walk returns a ranked list of nodes (rank 1, 2, 3, …, 50) for each query, and relevant answers are marked (+) in each list. Potential new instances of the target concept: “doll”, “child”, “toddler”, …

16 Learning methods
Weight tuning – weights learned per edge type [Diligenti et al., 2005]
Reranking – re-order the retrieved list using global features of all paths from source to destination [Minkov et al., 2006]
Features: edge label sequences, lexical unigrams, …
Example features for paths from “boys” to “dolls”: edge sequences nsubj → nsubj-inv; nsubj → partmod → partmod-inv → nsubj-inv; nsubj → partmod → prep.in; lexical unigrams “like”, “playing”
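For illustration, a hypothetical feature extractor in the spirit of the reranker's feature set above (function and feature-key names are mine, not from the paper):

```python
def path_features(edge_labels, lexical_nodes):
    """Global features of one source-to-destination path: the full
    edge-label sequence, plus lexical unigrams of nodes passed through."""
    feats = {"seq:" + "->".join(edge_labels): 1.0}
    for word in lexical_nodes:
        feats["lex:" + word] = 1.0
    return feats

print(path_features(["nsubj", "partmod", "partmod-inv", "nsubj-inv"],
                    ["like", "playing"]))
```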

17 Learning methods: Path-Constrained Graph Walk
PCW (summary): for each node x, learn P(x → z : relevant(z) | history(Vq, x)), where history(Vq, x) = the sequence of edge labels leading from Vq to x, with all histories stored in a tree.
[Figure: a path tree rooted at Vq = “girls”, branching on edge labels (nsubj, nsubj-inv, partmod, partmod-inv, prep.in, …), whose leaves x1, x2, x3 reach “boys” and “dolls”]
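A rough sketch of how such history-conditioned probabilities could be estimated from labeled walk paths (the data and names are hypothetical; the full method stores histories in an actual tree and prunes with a threshold):

```python
from collections import defaultdict

def train_path_tree(paths):
    """Estimate P(reaching a relevant node | edge-label history).
    Each training path is (tuple_of_edge_labels, is_relevant); every
    label-sequence prefix is one node of the implicit path tree."""
    counts = defaultdict(lambda: [0, 0])  # history -> [relevant, total]
    for labels, relevant in paths:
        for i in range(1, len(labels) + 1):
            prefix = labels[:i]
            counts[prefix][1] += 1
            counts[prefix][0] += int(relevant)
    return {h: r / t for h, (r, t) in counts.items()}

# Hypothetical training paths from seed "girls" to candidate nodes.
paths = [(("mention", "nsubj", "nsubj-inv", "mention-inv"), True),
         (("mention", "nsubj", "partmod", "prep.in"), False)]
probs = train_path_tree(paths)
print(probs[("mention", "nsubj")])  # P(relevant | this history prefix)
```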

18 City and person name extraction
City names: Vq = {sydney, stamford, greenville, los_angeles}
Person names: Vq = {carter, dave_kingman, pedro_ramos, florio}
Corpus: words | nodes | edges | NEs
MUC: 140K | 82K | 244K | 3K (true; complete labeling)
MUC+AP: 2,440K | 1,030K | 3,550K | 36K (auto; partial/noisy labeling)
10 (×4) queries for each task; train queries q1–q5 / test queries q6–q10
– Extract nodes of type NE
– GW: 6 steps, uniform/learned weights
– Reranking: top 200 nodes (using learned weights)
– Path trees: 20 correct / 20 incorrect; threshold 0.5

19 [Chart: precision vs. rank for city names and person names, MUC corpus]

20 [Chart: precision vs. rank, MUC; top edge types: conj-and, prep-in, nn, appos, … (city names); subj, obj, poss, nn, … (person names)]

21 [Chart: precision vs. rank, MUC; learned paths: prep-in-inv → conj-and, nn-inv → nn (city names); nsubj → nsubj-inv, appos → nn-inv (person names)]

22 [Chart: precision vs. rank, MUC; lexical features: LEX.“based”, LEX.“downtown” (city names); LEX.“mr”, LEX.“president” (person names)]

23 Vector-space models
Co-occurrence vectors (counts; window: +/- 2)
Dependency vectors [Padó & Lapata, Comp Ling 07]
– A path value function: length-based value: 1 / length(path); relation-based value: subj = 5, obj = 4, obl = 3, gen = 2, else 1
– A context selection function: minimal: verbal predicate-argument (length 1); medium: coordination, genitive constructions, noun compounds (<= 3); maximal: combinations of the above (<= 4)
– A similarity function: cosine or Lin
Only score the top nodes retrieved with reranking (~1000 overall)
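To make the comparison concrete, a toy sketch of the dependency-vector representation with the length-based path value and cosine similarity (data, paths, and function names are mine, not from the paper):

```python
import math
from collections import Counter

def path_value(path):
    """Length-based path value (Padó & Lapata): 1 / length(path)."""
    return 1.0 / len(path)

def dependency_vector(context_paths):
    """Context vector for a word: each dependency path from the word
    contributes path_value(path) to the context word it ends at."""
    vec = Counter()
    for context_word, path in context_paths:
        vec[context_word] += path_value(path)
    return vec

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u if k in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical paths collected for "girl" and "boy".
girl = dependency_vector([("like", ["nsubj"]),
                          ("doll", ["nsubj", "partmod", "prep.with"])])
boy = dependency_vector([("like", ["nsubj"]),
                         ("car", ["nsubj", "partmod", "prep.with"])])
print(cosine(girl, boy))  # high cosine suggests coordinate terms
```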

24 GWs vs. vector models, MUC
[Chart: precision vs. rank, city names and person names]
– The graph-based methods (syntactic + learning) are best

25 GWs vs. vector models, MUC+AP
[Chart: precision vs. rank, city names and person names]
– The advantage of the graph-based models diminishes with the amount of data
– This is hard to evaluate at high ranks

26 Outline
Open-domain IE as finding nodes “near” seeds on a graph
– Approach 1: A “natural” graph derived from a smaller corpus + learned similarity (with Einat Minkov, CMU → Nokia)
– Approach 2: A carefully-engineered graph derived from a huge corpus (with Richard Wang, CMU → ?)

27 Set Expansion for Any Language (SEAL) – (Wang & Cohen, ICDM 07) Basic ideas –Dynamically build the graph using queries to the web –Constrain the graph to be as useful as possible Be smart about queries Be smart about “patterns”: use clever methods for finding meaningful structure on web pages

28 System Architecture
Fetcher: download web pages from the Web that contain all the seeds
Extractor: learn wrappers from web pages
Ranker: rank entities extracted by wrappers
Example ranked output: 1. Canon 2. Nikon 3. Olympus 4. Pentax 5. Sony 6. Kodak 7. Minolta 8. Panasonic 9. Casio 10. Leica 11. Fuji 12. Samsung …

29 The Extractor Learn wrappers from web documents and seeds on the fly –Utilize semi-structured documents –Wrappers defined at character level Very fast No tokenization required; thus language independent Wrappers derived from doc d applied to d only –See ICDM 2007 paper for details
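A rough illustration of character-level wrapper learning under my own simplification (one occurrence per seed, in the spirit of the E2 extractor on slide 35; the names and the toy document are mine, and real SEAL wrappers are more constrained):

```python
def common_context(doc, seeds):
    """Find the longest left/right character contexts shared by one
    occurrence of each seed in this document; the pair acts as a
    wrapper for extracting new candidates from the same document."""
    def around(s):
        i = doc.index(s)  # first occurrence of this seed
        return doc[:i], doc[i + len(s):]

    lefts, rights = zip(*(around(s) for s in seeds if s in doc))
    # Longest common suffix of the left contexts...
    left = lefts[0]
    for l in lefts[1:]:
        while not l.endswith(left):
            left = left[1:]
    # ...and longest common prefix of the right contexts.
    right = rights[0]
    for r in rights[1:]:
        while not r.startswith(right):
            right = right[:-1]
    return left, right

doc = "<li>Canon</li><li>Nikon</li><li>Sony</li>"
l, r = common_context(doc, ["Canon", "Nikon"])
print(repr(l), repr(r))  # wrapper: extract strings found between l and r
```

Because the wrapper is pure character context, nothing here depends on tokenization or language, which matches the language-independence claim above.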

30 [Screenshot: example extractions with noisy candidates highlighted (callouts: “I am noise”, “Me too!”)]

31 The Ranker
Rank candidate entity mentions based on “similarity” to seeds
– Noisy mentions should be ranked lower
Random Walk with Restart (GW), as before… What's the graph?

32 Building a Graph
A graph consists of a fixed set of…
– Node types: {seeds, document, wrapper, mention}
– Labeled directed edges: {find, derive, extract}. Each edge asserts that a binary relation r holds; each edge has an inverse relation r⁻¹ (so the graph is cyclic).
– Intuition: good extractions are extracted by many good wrappers, and good wrappers extract many good extractions.
[Figure: seeds “ford”, “nissan”, “toyota” find documents curryauto.com and northpointcars.com, which derive Wrappers #1–#4, which extract mentions: “honda” 26.1%, “acura” 34.6%, “chevrolet” 22.5%, “bmw pittsburgh” 8.4%, “volvo chicago” 8.4%]
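A minimal sketch of that graph construction with automatic inverse edges (the node names come from the figure; the walk from slide 11 can then be run over this adjacency structure to rank mentions):

```python
from collections import defaultdict

def build_graph(triples):
    """Typed graph with inverse edges: each (src, rel, dst) also adds
    (dst, rel-inv, src), so good mentions and good wrappers can
    reinforce each other through cycles."""
    adj = defaultdict(list)
    for src, rel, dst in triples:
        adj[src].append((rel, dst))
        adj[dst].append((rel + "-inv", src))
    return adj

# Hypothetical fragment of the SEAL graph from the figure above.
g = build_graph([
    ("seeds", "find", "curryauto.com"),
    ("curryauto.com", "derive", "wrapper1"),
    ("wrapper1", "extract", "honda"),
    ("wrapper1", "extract", "acura"),
])
print(g["wrapper1"])  # outgoing extract edges plus the derive-inv edge
```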

33 Evaluation Datasets: closed sets

34 Evaluation Method
Mean Average Precision (MAP)
– Commonly used for evaluating ranked lists in IR
– Contains recall- and precision-oriented aspects
– Sensitive to the entire ranking
– Mean of the average precisions for each ranked list
Evaluation procedure (per dataset):
1. Randomly select three true entities and use their first listed mentions as seeds
2. Expand the three seeds obtained from step 1
3. Repeat steps 1 and 2 five times
4. Compute MAP for the five ranked lists
AvgPrec(L) = Σ_r Prec(r) × NewEntity(r) / #TrueEntities, where L = ranked list of extracted mentions, r = rank, and Prec(r) = precision at rank r. NewEntity(r) = 1 iff (a) the extracted mention at rank r matches any true mention and (b) no other extracted mention at a rank less than r is of the same entity as the one at r. #TrueEntities = total number of true entities in this dataset.
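A small sketch of this metric as defined above (the data structures are mine: mention_to_entity maps each true mention string to its entity, and runs is the list of ranked lists from the five repetitions):

```python
def average_precision(ranked_mentions, mention_to_entity, num_true_entities):
    """AvgPrec as on this slide: Prec(r) counts matching mentions in the
    top r; a rank contributes only the first time its entity appears."""
    hits, ap, seen = 0, 0.0, set()
    for r, m in enumerate(ranked_mentions, start=1):
        entity = mention_to_entity.get(m)  # None if not a true mention
        if entity is not None:
            hits += 1                      # condition (a)
            if entity not in seen:         # condition (b)
                seen.add(entity)
                ap += hits / r             # Prec(r) at this rank
    return ap / num_true_entities

def mean_average_precision(runs, mention_to_entity, num_true_entities):
    return sum(average_precision(l, mention_to_entity, num_true_entities)
               for l in runs) / len(runs)
```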

35 Experimental Results: 3 seeds Vary: [Extractor] + [Ranker] + [Top N URLs] Extractor: E1: Baseline Extractor (longest common context for all seed occurrences) E2: Smarter Extractor (longest common context for 1 occurrence of each seed) Ranker: { EF: Baseline (Most Frequent), GW: Graph Walk } N URLs: { 100, 200, 300 }

36 Side by side comparisons: Talukdar, Brants, Liberman, Pereira, CoNLL 06

37 Side by side comparisons: Ghahramani & Heller, NIPS 2005 (EachMovie vs. WWW; NIPS vs. WWW)

38 A limitation of the original SEAL

39 Proposed Solution: Iterative SEAL (iSEAL) (Wang & Cohen, ICDM 2008)
Makes several calls to SEAL; each call…
– Expands a couple of seeds
– Aggregates statistics
Evaluate iSEAL using…
– Two iterative processes: supervised vs. unsupervised (bootstrapping)
– Two seeding strategies: fixed seed size vs. increasing seed size
– Five ranking methods

40 iSEAL (Fixed Seed Size, Supervised)
Start from the initial seeds; each call expands a small batch of seeds, and nodes are finally ranked by proximity to the seeds in the full aggregated graph.
Refinement (ISS): increase the size of the seed set for each expansion over time: 2, 3, 4, 4, …
Variant (Bootstrap): use high-confidence extractions when seeds run out.
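A sketch of the iteration under my own simplification (run_seal stands in for one SEAL call returning scored candidate mentions; summing scores stands in for merging the per-call graphs before the final walk):

```python
import random

def iseal(seeds, run_seal, seed_sizes=(2, 3, 4, 4)):
    """iSEAL sketch (supervised, increasing seed size): each round
    expands a small batch of seeds and aggregates the statistics."""
    scores = {}
    for size in seed_sizes:
        batch = random.sample(seeds, min(size, len(seeds)))
        for mention, s in run_seal(batch).items():
            scores[mention] = scores.get(mention, 0.0) + s
    # Finally rank candidates by aggregated proximity to the seeds.
    return sorted(scores, key=scores.get, reverse=True)
```

In the bootstrapping variant, high-confidence entries from scores would be appended to the seed pool between rounds instead of sampling only from the labeled seeds.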

41 Ranking Methods
Random Graph Walk with Restart – H. Tong, C. Faloutsos, and J.-Y. Pan. Fast random walk with restart and its application. In ICDM, 2006.
PageRank – L. Page, S. Brin, R. Motwani, and T. Winograd. The PageRank citation ranking: Bringing order to the web. 1998.
Bayesian Sets (over flattened graph) – Z. Ghahramani and K. A. Heller. Bayesian sets. In NIPS, 2005.
Wrapper Length – weights each item based on the length of the common contextual string of that item and the seeds.
Wrapper Frequency – weights each item based on the number of wrappers that extract the item.

42–45 [Results charts]

46 Little difference between ranking methods in the supervised case (all seeds correct); large differences when bootstrapping. Increasing seed sizes {2, 3, 4, 4, …} make all ranking methods improve steadily in the bootstrapping case.

47 [Results chart]

48 Current work
Start with the name of a concept (e.g., “NFL teams”)
Look for (language-dependent) patterns: “… for successful NFL teams (e.g., Pittsburgh Steelers, New York Giants, …)”
Take the most frequent answers as seeds
Run bootstrapping iSEAL with seed sizes 2, 3, 4, 4, …

49 Datasets with concept names

50 Experimental results [Charts; baseline: direct use of text patterns]

51 Summary/Conclusions
Open-domain IE as finding nodes “near” seeds on a graph
[Graph of patterns and entities, as on slide 6: “…at NIPS, AISTATS, KDD and other learning conferences…”, “on PC of KDD, SIGIR, … and…”, “For skiers, NIPS, SNOWBIRD, … and…”, “…AISTATS, KDD, …”; entity nodes: NIPS, AISTATS, KDD, SNOWBIRD, SIGIR]
shorter paths ~ earlier iterations; many paths ~ additional evidence

52 Summary/Conclusions
Open-domain IE as finding nodes “near” seeds on a graph, approach 1 (Minkov & Cohen, EMNLP 08):
– Graph ~ dependency-parsed corpus
– Off-the-shelf distance metrics are not great
– With learning: results significantly better than state-of-the-art on small corpora (e.g., a personal email corpus); results competitive on 2M+ word corpora

53 Summary/Conclusions
Open-domain IE as finding nodes “near” seeds on a graph, approach 2 (Wang & Cohen, ICDM 07, 08):
– Graph built on-the-fly with web queries; a good graph matters!
– Off-the-shelf distance metrics work: differences are minimal for clean seeds; modest improvements from learning with clean seeds, e.g., reranking (not described here); bigger differences between similarity measures with noisy seeds

54 Thanks to
DARPA PAL program – Minkov, Cohen, Wang
Yahoo! Research Labs – Minkov
Google Research Grant program – Wang
The organizers for inviting me!
Sponsored links: http://boowa.com (Richard's demo)

