
CLEF 2008 - Århus: Robust – Word Sense Disambiguation exercise. UBC: Eneko Agirre, Oier Lopez de Lacalle, Arantxa Otegi, German Rigau. UVA & Irion: Piek Vossen.


1 CLEF 2008 - Århus
Robust – Word Sense Disambiguation exercise
UBC: Eneko Agirre, Oier Lopez de Lacalle, Arantxa Otegi, German Rigau
UVA & Irion: Piek Vossen
UH: Thomas Mandl

2 Introduction
- Robust track: emphasizes difficult topics by using a non-linear combination of per-topic results (GMAP)
- This year, automatic word sense annotation was added:
  - English documents and topics (English WordNet)
  - Spanish topics (Spanish WordNet, closely linked to the English WordNet)
- Participants explore how word senses (plus the semantic information in wordnets) can be used in IR and CLIR
- See also the QA-WSD exercise, which uses the same set of documents

3 Documents
- News collections: LA Times 94, Glasgow Herald 95
- Sense information added to all content words:
  - Lemma
  - Part of speech
  - Weight of each sense in WordNet 1.6
- XML with DTD provided
- Two leading WSD systems:
  - National University of Singapore
  - University of the Basque Country
- Significant effort (100M-word corpus)
- Special thanks to Hwee Tou Ng and colleagues from NUS and to Oier Lopez de Lacalle from UBC

4 Documents: example XML
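The example XML shown on this slide is not reproduced in the transcript. The sketch below is a hypothetical illustration of the kind of per-word annotation the preceding slide describes (lemma, part of speech, weighted senses); the element and attribute names here are assumptions, not the official CLEF Robust-WSD DTD.

```python
import xml.etree.ElementTree as ET

# Hypothetical annotated term; tag and attribute names are illustrative only,
# not the official DTD distributed with the collection.
doc = """
<TERM ID="t1" LEMA="bank" POS="NN">
  <WF>banks</WF>
  <SYNSET SCORE="0.80" CODE="06227059-n"/>
  <SYNSET SCORE="0.20" CODE="06112609-n"/>
</TERM>
"""

term = ET.fromstring(doc)
# Collect (sense code, weight) pairs and pick the highest-weighted sense.
senses = [(s.get("CODE"), float(s.get("SCORE"))) for s in term.findall("SYNSET")]
best_code, best_score = max(senses, key=lambda cs: cs[1])
print(term.get("LEMA"), term.get("POS"), best_code, best_score)
```

A participant could index only `best_code`, or keep the whole weighted sense list (see the expansion sketches later in the transcript).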

5 Topics
- Existing CLEF topics in English and Spanish were reused:
  - 2001: topics 41-90 (LA 94)
  - 2002: topics 91-140 (LA 94)
  - 2004: topics 201-250 (GH 95)
  - 2003: topics 141-200 (LA 94, GH 95)
  - 2005: topics 251-300 (LA 94, GH 95)
  - 2006: topics 301-350 (LA 94, GH 95)
- First three listed used as training (plus relevance judgements)
- Last three listed used for testing

6 Topics: WSD
- English topics were disambiguated by both the NUS and UBC systems
- Spanish topics: no large-scale WSD system was available, so the first-sense heuristic was used
- Word sense codes are shared between the Spanish and English wordnets
- Sense information added to all content words:
  - Lemma
  - Part of speech
  - Weight of each sense in WordNet 1.6
- XML with DTD provided
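The first-sense heuristic mentioned above simply assigns every occurrence of a lemma its most frequent sense in the wordnet. A minimal sketch; the sense inventory here is made-up stand-in data, not real Spanish WordNet content:

```python
# Toy sense inventory: lemma -> sense codes ordered by frequency.
# Entries are illustrative, not real WordNet 1.6 data.
SENSE_INVENTORY = {
    "banco": ["02787772-n", "08420278-n"],  # e.g. bench sense listed first
    "pez":   ["02512053-n"],
}

def first_sense(lemma):
    """Return the most frequent sense for a lemma, or None if unknown."""
    senses = SENSE_INVENTORY.get(lemma)
    return senses[0] if senses else None

print(first_sense("banco"))
```

Despite its simplicity, the first-sense heuristic is a standard strong baseline in WSD, which is why it was an acceptable fallback for the Spanish topics.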

7 Topics: WSD example

8 Evaluation
- Relevance assessments from previous years were reused
- Relevance assessments for the training topics were provided alongside those topics
- Measures: MAP and GMAP
- Participants had to send at least one run which did not use WSD and one run which used WSD
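GMAP is the geometric mean of per-topic average precision, which rewards improvements on hard (low-AP) topics far more than the arithmetic MAP does; this is why the Robust track uses it. A small illustration (the epsilon handling is an assumption; evaluation tools differ in how they treat AP = 0):

```python
import math

def mean_ap(aps):
    """Arithmetic mean of per-topic average precision (MAP)."""
    return sum(aps) / len(aps)

def gmap(aps, eps=1e-5):
    """Geometric mean of AP; a small epsilon avoids log(0) on failed topics."""
    return math.exp(sum(math.log(ap + eps) for ap in aps) / len(aps))

aps = [0.60, 0.55, 0.02]       # one very hard topic
print(round(mean_ap(aps), 3))  # ~0.39
print(round(gmap(aps), 3))     # much lower: the hard topic dominates
```

Raising the hard topic's AP from 0.02 to 0.10 barely moves MAP but lifts GMAP substantially, which is exactly the behaviour the track wants to reward.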

9 Participation
- 8 official participants, plus two late ones:
  - Martínez et al. (Univ. of Jaen)
  - Navarro et al. (Univ. of Alicante)
- 45 monolingual runs
- 18 bilingual runs

10 Monolingual results
- MAP: non-WSD run best overall; 3 participants improve it with WSD
- GMAP: WSD run best overall; 3 participants improve it with WSD

11 Monolingual: using WSD
- UNINE: synset indexes, combined with results from other indexes
  - Improvement in GMAP
- UCM: query expansion using structured queries
  - Improvement in MAP and GMAP
- IXA: expand to all synonyms of all senses in topics, best sense in documents
  - Improvement in MAP
- GENEVA: synset indexes, expanding to synonyms and hypernyms
  - No improvement, except for some topics
- UFRGS: only use lemmas (plus multiwords)
  - Improvement in MAP and GMAP
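Several of the expansion strategies above boil down to replacing each topic term with the synonyms of its chosen sense(s). A minimal sketch of best-sense synonym expansion; the synonym table and sense codes are made-up illustrations, not any participant's actual system:

```python
# Toy synonym table keyed by sense code; illustrative data only.
SYNONYMS = {
    "06227059-n": ["bank", "banking_company", "depository_financial_institution"],
}

def expand_query(terms_with_senses, synonyms):
    """Expand each (term, best_sense) pair with that sense's synonyms."""
    expanded = []
    for term, sense in terms_with_senses:
        expanded.append(term)
        for syn in synonyms.get(sense, []):
            if syn != term:           # avoid duplicating the original term
                expanded.append(syn)
    return expanded

print(expand_query([("bank", "06227059-n")], SYNONYMS))
```

The results above suggest the design choice matters: expanding all senses in topics (IXA) or using structured queries (UCM) helped, while undirected expansion to hypernyms (GENEVA) mostly did not.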

12 Monolingual: using WSD
- UNIBA: combine synset indexes (best sense)
  - Improvements in MAP
- Univ. of Alicante: expand to all synonyms of the best sense
  - Improvement on training topics / decrease on test topics
- Univ. of Jaen: combine synset indexes (best sense)
  - No improvement, except for some topics

13 Bilingual results
- MAP and GMAP: best results for non-WSD runs
- Only IXA and UNIBA improve using WSD, but with very low GMAP

14 Bilingual: using WSD
- IXA: wordnets as the sole source of translations
  - Improvement in MAP
- UNIGE: translation of the topic for the baseline
  - No improvement
- UFRGS: association rules from parallel corpora, plus use of lemmas (no WSD)
  - No improvement
- UNIBA: wordnets as the sole source of translations
  - Improvement in both MAP and GMAP
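Using wordnets as the sole translation source (IXA, UNIBA) works because, as noted earlier, sense codes are shared between the Spanish and English wordnets: a disambiguated Spanish term can be translated by looking up the same synset code in the English wordnet. A toy sketch with illustrative entries:

```python
# Toy wordnets sharing synset codes; entries are illustrative only.
SPANISH_WN = {"banco": ["08420278-n"]}                 # lemma -> sense codes
ENGLISH_WN = {"08420278-n": ["bank", "banking_company"]}  # code -> lemmas

def translate(lemma):
    """Translate a Spanish lemma to English via shared synset codes."""
    translations = []
    for code in SPANISH_WN.get(lemma, []):
        translations.extend(ENGLISH_WN.get(code, []))
    return translations

print(translate("banco"))
```

When the lemma has been disambiguated first, only the codes of the chosen sense(s) are looked up, which keeps spurious translations of the other senses out of the query.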

15 Conclusions and future
- Novel dataset with WSD of documents
- Successful participation: 8+2 participants
- Some positive results with top-scoring systems
- Room for improvement and for new techniques
- Analysis:
  - Correlation with polysemy and difficult topics underway
  - Manual analysis of topics which improve with WSD
- New proposal for 2009

16 Robust – Word Sense Disambiguation exercise: Thank you!


18 Word senses can help CLIR
- We will provide state-of-the-art WSD tags
  - For the first time we offer a sense-disambiguated collection
  - All senses come with confidence scores (error propagation)
  - Participants can choose how to use them (e.g. nouns only)
  - Synonyms/translations for the senses are also provided
- The disambiguated collection allows for:
  - Expanding the collection to synonyms and broader terms
  - Translation to all languages that have a wordnet
  - Focused expansion/translation of the collection
  - Higher recall
  - Sense-based blind relevance feedback
- There is more information in the documents
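Because every sense carries a confidence score, expansion need not commit to a single best sense: synonyms can be added with weights proportional to sense confidence, so WSD errors propagate softly instead of causing hard failures. A sketch under that idea, with assumed data (the sense distributions and synonym lists are illustrative, not real WordNet entries):

```python
# Illustrative sense distributions and synonym lists.
SENSES = {"bank": [("06227059-n", 0.8), ("06112609-n", 0.2)]}
SYNONYMS = {"06227059-n": ["banking_company"], "06112609-n": ["riverbank"]}

def weighted_expansion(term):
    """Map expansion terms to weights derived from sense confidences."""
    weights = {term: 1.0}  # the original term keeps full weight
    for sense, score in SENSES.get(term, []):
        for syn in SYNONYMS.get(sense, []):
            weights[syn] = weights.get(syn, 0.0) + score
    return weights

print(weighted_expansion("bank"))
```

The resulting weights can feed directly into a weighted-query retrieval model, so a mis-ranked sense only slightly dilutes the query rather than replacing it outright.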

19 CLIR WSD exercise
- Add the WSD-tagged collection/topics as an additional "language" in the ad-hoc task
  - Same topics
  - Same document collection
  - Just offer an additional resource
- An additional run: with and without WSD
- Tasks: X2ENG and ENG2ENG (control)
- Extra resources needed: relevance assessment of the additional runs

20 Usefulness of WSD in IR/CLIR is disputed, but…
- Real experiments rather than artificial ones
- Expansion rather than WSD alone
- A weighted list of senses rather than only the best sense
- Control over which words to disambiguate
- WSD technology has improved
- Coarser-grained senses (90% accuracy at SemEval 2007)

21 QA WSD pilot exercise
- Add the WSD-tagged collection/queries to the multilingual QA task
  - Same topics
  - LA94 and GH95 (not Wikipedia)
- In addition to the word senses we provide:
  - Synonyms / translations for those senses
- Need to send one run to the multilingual QA task
- 2 runs: with and without WSD
- Tasks: X2ENG and ENG2ENG (for QA WSD participants only)
- Extra resources needed: relevance assessment of the additional runs

22 QA WSD pilot exercise: details
- Wikipedia won't be disambiguated
- Only a subset of the main QA task will be comparable
- In main QA, multiple answers are required
- In addition to the normal evaluation, the first reply not coming from Wikipedia is evaluated

23 WSD 4 AVE
- In addition to the word senses, we provide:
  - Synonyms / translations for those senses
- Need to send two runs (one more than other participants): with and without WSD
- Tasks: X2ENG and ENG2ENG (control)
- Additional resources: word sense tags for the snippets returned by QA results (automatic mapping to the original document collection)

