
1  The Impact of Task and Corpus on Event Extraction Systems
Ralph Grishman, New York University
Malta, May 2010

2  Event Extraction (“EE”)
EE systems extract from text all instances of a given type of event, along with the event’s participants and modifiers.
There has been considerable research over the past decade on how to model such events and how to learn such models.
But most advances are tested on only one or two types of events.
We don’t always appreciate the degree to which particular approaches depend on the type of event and the test corpus.

3  A Bit of EE History
MUC scenario templates, 1987 – 1998
 – MUC-3/4: terrorist incidents
 – MUC-6: executive succession
Event 99: move towards simpler templates
ACE 2005: inventory of 33 elementary news events
Bio-molecular EE (BioCreative, BioNLP)

4  Event models
Largely based on local syntactic context
 – In the simplest form, SVO patterns or comparable nominal patterns with semantic class constraints:
     organization attacked location
     organization’s attack on location
 – Some gain from chain and tree patterns:
     organization launched an attack on location
 – May be implemented as a pattern matcher or as a classifier using basically the same features (a pattern-matcher sketch follows below)
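As a concrete illustration of the pattern-matcher option, here is a minimal sketch of an SVO pattern matcher with semantic class constraints. The Triple structure, the ATTACK_PATTERNS list, and the pre-collapsed chain pattern are illustrative assumptions, not code from the talk; a real system would obtain triples from a dependency parser and semantic classes from a named-entity tagger.

```python
# Minimal sketch of an SVO event pattern matcher with semantic class constraints.
# The Triple structure and ATTACK_PATTERNS are hypothetical, not from the talk.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Triple:
    subj: str         # head word of the subject
    verb: str         # lemmatized verb (or collapsed chain, e.g. "launch_attack_on")
    obj: str          # head word of the object
    subj_class: str   # semantic class of the subject, e.g. "ORGANIZATION"
    obj_class: str    # semantic class of the object, e.g. "LOCATION"

# One entry per paraphrase: (verb or chain, required subject class, required object class)
ATTACK_PATTERNS = [
    ("attack", "ORGANIZATION", "LOCATION"),            # "organization attacked location"
    ("launch_attack_on", "ORGANIZATION", "LOCATION"),  # chain pattern, pre-collapsed
]

def match_attack(t: Triple) -> Optional[dict]:
    """Return an ATTACK event record if the triple matches a pattern, else None."""
    for verb, subj_class, obj_class in ATTACK_PATTERNS:
        if (t.verb, t.subj_class, t.obj_class) == (verb, subj_class, obj_class):
            return {"type": "ATTACK", "attacker": t.subj, "target": t.obj}
    return None

print(match_attack(Triple("rebels", "attack", "village", "ORGANIZATION", "LOCATION")))
```

A classifier-based implementation would use the same verb and semantic-class information as features rather than as hard constraints.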

5  Impact we will explore this morning
 – Breadth of task vs. learning strategy
 – Breadth of corpus vs. event model

6  Breadth of Task
EE fills an event template (with possible sub-templates). How wide a range of information is captured in this template?
 – MUC-3/4: an attack and its effect on people and buildings
 – ACE: attack and effects reported separately
 – MUC-6: leaving a job and starting a new job reported together
 – ACE: leaving a job and starting a job reported separately

7  Semi-supervised learning strategies
Supervised EE training is very expensive:
 – lots of types of events
 – lots of paraphrases of each event
 – event annotation is slow (because the information is complex)
So semi-supervised methods are particularly attractive:
 – start with a seed set
 – grow incrementally (‘bootstrapping’)
 – stop the bootstrapping
    – by using an annotated development sample (see the sketch below), or
    – by training multiple mutually exclusive events (counter-training)
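Below is a minimal sketch of the first stopping criterion: checking precision on a small annotated development sample after each bootstrapping round. The data format and tolerance are illustrative assumptions, and counter-training is not shown.

```python
# Hypothetical sketch of stopping bootstrapping with an annotated development sample.
# dev_docs is assumed to be a list of (patterns_in_document, is_relevant) pairs,
# where patterns_in_document is a set of pattern strings and is_relevant is 0 or 1.
def precision_on_dev(selected_patterns, dev_docs):
    """Of the dev documents in which any selected pattern fires,
    what fraction are actually relevant?"""
    fired = [is_relevant for patterns, is_relevant in dev_docs
             if patterns & selected_patterns]
    return sum(fired) / len(fired) if fired else 0.0

def should_stop(selected_patterns, dev_docs, previous_precision, tolerance=0.05):
    """Stop the bootstrapping once dev-set precision drops noticeably."""
    current = precision_on_dev(selected_patterns, dev_docs)
    return current < previous_precision - tolerance, current
```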

8  Document-centric Event Discovery
Premise: patterns which occur relatively more frequently in event-relevant documents (than in other documents) are event-relevant patterns [Riloff 1996]
Procedure [Yangarber 2000] (sketched below):
 – Start with seed patterns
 – Retrieve documents containing selected patterns
 – Extract all patterns from retrieved documents
 – Rank patterns by relative frequency
 – Add top-ranked patterns to selected set
 – Repeat
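A minimal sketch of this loop, under the assumption that a function extract_patterns(doc) returns the set of patterns occurring in a document. The scoring formula here is a simple relative-frequency difference, only an approximation of the rankings actually used by Riloff and Yangarber.

```python
# Sketch of the document-centric bootstrapping loop; not the authors' code.
from collections import Counter

def bootstrap(corpus, seed_patterns, extract_patterns, iterations=10, add_per_round=2):
    """corpus: list of documents; extract_patterns(doc) -> set of pattern strings."""
    selected = set(seed_patterns)
    doc_patterns = [extract_patterns(doc) for doc in corpus]
    for _ in range(iterations):
        # Retrieve documents containing any selected pattern
        relevant = [pats for pats in doc_patterns if pats & selected]
        if not relevant:
            break
        # Rank candidate patterns by how much more often they occur in
        # event-relevant documents than in the corpus as a whole
        rel_freq = Counter(p for pats in relevant for p in pats)
        all_freq = Counter(p for pats in doc_patterns for p in pats)
        scores = {p: rel_freq[p] / len(relevant) - all_freq[p] / len(doc_patterns)
                  for p in rel_freq if p not in selected}
        if not scores:
            break
        # Add the top-ranked patterns to the selected set and repeat
        top = sorted(scores, key=scores.get, reverse=True)[:add_per_round]
        selected.update(top)
    return selected
```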

9  Successes and difficulties
Document-centric strategy successful for MUC-3 and MUC-6
 – Captures related events
But this strategy performs poorly for some ACE events
 – High degree of co-occurrence between selected event types: 47% of documents reporting an attack also report a death
 – Natural scenarios of related (co-occurring) events: starting and leaving a job; crime and arrest; etc.
 – The semi-supervised learner quickly expands from seed events (representing a single event type) to related event types in the natural scenario

10  Alternatives to document-centric strategies
WordNet-based strategy [Stevenson and Greenwood 2005]
 – Expand the seed set by replacing words in patterns by the most similar lexical items
    – based on WordNet synonyms & hypernyms (a sketch follows below)
    – encounters problems with highly polysemous words
Combined strategy [S. Liao, NYU 2010]
 – Document-based information reduces the problems of polysemy
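A minimal sketch of the WordNet expansion step, assuming NLTK's WordNet interface (not the implementation used by Stevenson and Greenwood); the second example hints at the polysemy problem noted above.

```python
# Hypothetical sketch: expand a pattern's trigger word with WordNet synonyms and hypernyms.
import nltk
nltk.download("wordnet", quiet=True)  # one-time download of the WordNet data
from nltk.corpus import wordnet as wn

def expand_trigger(word, pos=wn.VERB):
    """Return lexical items related to `word`: synonyms plus hypernym lemmas."""
    related = set()
    for synset in wn.synsets(word, pos=pos):
        related.update(lemma.name() for lemma in synset.lemmas())        # synonyms
        for hyper in synset.hypernyms():
            related.update(lemma.name() for lemma in hyper.lemmas())     # hypernyms
    related.discard(word)
    return sorted(related)

print(expand_trigger("attack"))
# For a highly polysemous trigger like "leave", many expansions come from
# irrelevant senses, which is the difficulty the slide points out.
print(expand_trigger("leave"))
```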

11  Event extraction performance (F measure)
[results chart not included in the transcript]

12  Breadth of Corpora
Are documents in the test corpus primarily about events of interest, or are they an unselected, heterogeneous corpus?
Issues:
 – EE corpora are expensive
 – Typically EE test corpora are enriched to be sure they have enough relevant events
    – MUC-3 and MUC-6 … over 50% relevant documents
    – ACE newswire … an average of 3 attack events/document
 – This makes evaluation somewhat unrealistic

13  Why does corpus breadth matter?
Event detection is a Word Sense Disambiguation (WSD) problem:
 – Fred attacked Mary [physically or verbally?]
 – Fred left the Pentagon [retired or went on a trip?]
 – Local patterns are not sufficient
May be a minor problem in a selected corpus but a major one in a heterogeneous corpus.
Attack event detector trained on the ACE corpus:
 – tested on ACE newswire: recall 66%, spurious event rate 8%
 – tested on the New York Times: recall 46%, spurious event rate 111%

14  Handling heterogeneous corpora
Add a topic model to do WSD for event triggers (sketched below):
 – a document-level bag-of-words model predicting whether the document contains an attack event
 – combined with the traditional local model
 – [similar to the Patwardhan & Riloff 2009 relevant-region model]
Attack event detector trained on the ACE corpus, augmented with the topic model:
 – tested on ACE newswire: recall 66%, spurious event rate 7%
 – tested on the New York Times: recall 33%, spurious event rate 24%
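A minimal sketch of such a combination, using scikit-learn for the document-level bag-of-words classifier; the tiny training set, the thresholds, and the conjunctive combination rule are illustrative assumptions, not the system described in the talk.

```python
# Hypothetical sketch: combine a document-level bag-of-words classifier with a
# local trigger score before reporting an ATTACK event.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training documents labeled for "contains a physical attack event"
docs = ["rebels attacked the village at dawn", "the senator attacked the proposal",
        "troops shelled the border town", "critics attacked the new budget plan"]
labels = [1, 0, 1, 0]

doc_model = make_pipeline(CountVectorizer(), LogisticRegression())
doc_model.fit(docs, labels)

def attack_event(local_score, document):
    """Report an ATTACK event only if both the local trigger model and the
    document-level topic model are sufficiently confident."""
    doc_prob = doc_model.predict_proba([document])[0][1]
    return local_score > 0.5 and doc_prob > 0.5

# local_score would come from the trigger classifier over local syntactic features;
# the document-level model suppresses the metaphorical "attack" reading here.
print(attack_event(local_score=0.8, document="the senator attacked the proposal"))
```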

15  Conclusion: Implications for EE Evaluation
Continued progress in EE will require:
 – appreciating the range of EE tasks
    – and how the choice of task affects EE strategy
 – and appreciating the influence of test corpora
    – evaluating on larger, more heterogeneous corpora
    – with more selective annotation

16 Thank you.

