
1 Aiding WSD by exploiting hypo/hypernymy relations in a restricted framework. MEANING project, Experiment 6.H(d). Luis Villarejo and Lluís Màrquez

2 Preface This document is the first draft of the description of Experiment 6.H(d): “Aiding WSD by exploiting hypo/hypernymy relations in a restricted framework”, to be included in the Working Paper describing Experiment 6.H: Bootstrapping.

3 Outline
- Introduction
- Our approach vs Mihalcea's
- Technical details
- Experiments
- Results
- Conclusions
- Future work

4 Introduction “The Role of Non-Ambiguous Words in Natural Language Disambiguation”, Rada Mihalcea, University of North Texas.
- Task: automatic resolution of ambiguity in natural language.
- Problem: the lack of large amounts of annotated data.
- Proposal: induce knowledge from non-ambiguous words via equivalence classes to automatically build an annotated corpus.

5 Introduction In this experiment we explore whether training example sets can be enlarged with automatically extracted examples associated to each sense. Some recent work has addressed the extraction of examples associated to word senses in an unsupervised manner, such as Mihalcea, R., “The Role of Non-Ambiguous Words in Natural Language Disambiguation”, which covers POS tagging, named entity tagging and WSD. Here we tackle only WSD. We did our best to reproduce the experimental conditions described in Mihalcea's paper, but we also explored new possibilities not considered there. Our scenario is simpler, however: our only source for the extra training examples is SemCor, which is already sense-annotated, so we do not need to perform any kind of disambiguation. The semantic relations used to acquire the target words for the extra examples were taken from the MCR (Multilingual Central Repository).

6 Introduction “Equivalence classes consist of semantically related words.” The approach focuses on three tasks: part-of-speech tagging, named entity tagging and word sense disambiguation. Equivalence classes are built from WordNet 1.6; candidate relations: synonyms? hyperonyms? hyponyms? holonyms? meronyms?

Target word | Meaning             | Monosemous equivalent
plant       | living_organism     | flora
plant       | manufacturing_plant | industrial_plant

What happens with the other two meanings of plant (actor and trick)?
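As an illustration, the following sketch collects monosemous equivalents through direct hypernymy/hyponymy using NLTK's WordNet interface. Note this is a newer WordNet than the 1.6 version used in the experiment, so sense inventories differ; the function name and structure are ours, not from the original work.

```python
from nltk.corpus import wordnet as wn

def monosemous_equivalents(synset, pos):
    """Collect lemmas of directly related synsets (hypernyms and hyponyms)
    that are monosemous, i.e. belong to exactly one synset for this POS."""
    equivalents = []
    for related in synset.hypernyms() + synset.hyponyms():
        for lemma in related.lemmas():
            # A word is monosemous if it maps to a single synset.
            if len(wn.synsets(lemma.name(), pos=pos)) == 1:
                equivalents.append((lemma.name(), related.name()))
    return equivalents

# The first two noun senses of "plant", as discussed on the slide.
for synset in wn.synsets('plant', pos=wn.NOUN)[:2]:
    print(synset.name(), '-', synset.definition())
    print('  monosemous equivalents:', monosemous_equivalents(synset, wn.NOUN))
```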

7 Our approach vs Mihalcea's
- Manually selected word set. Ours: child, day, find, keep, live, material, play and serve (5 verbs + 3 nouns). Mihalcea's: bass, crane, motion, palm, plant and tank (6 words, presumably nouns).
- Only two, specially selected and clearly differentiated, senses per word:
  {play 00727813}: play games, play sports. “Tom plays tennis”
  {play 01177316}: play a role or part. “Gielgud played Hamlet”
- Source for examples on the equivalent words. Ours: SemCor. Mihalcea's: raw corpus (monosemous words, so no need for annotation).
- Relation of equivalence between words. Ours: synonymy, hyperonymy, hyponymy and mixes (levels 1, 2 and both). Mihalcea's: synonymy (not clearly stated).
- Equivalent corpus sizes in both cases.
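Since SemCor is distributed with NLTK, harvesting contexts for a monosemous equivalent reduces to a corpus scan. A minimal sketch, assuming the NLTK `semcor` corpus has been downloaded; the function and the example cap are ours:

```python
from nltk.corpus import semcor  # requires: nltk.download('semcor')

def semcor_examples(target_word, max_examples=50):
    """Collect SemCor sentences containing the target word form.
    SemCor is already sense-annotated, so every occurrence of a
    monosemous equivalent can be used directly as a training example
    for the sense it stands for."""
    examples = []
    for sent in semcor.sents():
        if target_word in sent:
            examples.append(' '.join(sent))
            if len(examples) >= max_examples:
                break
    return examples

# e.g. harvest contexts for the monosemous equivalent "flora"
print(semcor_examples('flora')[:3])
```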

8 Our approach vs Mihalcea's
- Features used. Ours: two left words, two right words; two left POS, two right POS; one left word, one right word; one left POS, one right POS; bag of words (as in the WSD task of the MEANING project). Mihalcea's: two left words, two right words; nouns before and after; verbs before and after; sense-specific keywords (not clearly described).
- Learning technique. Ours: SVM. Mihalcea's: TiMBL memory-based learner.
- Use of the examples coming from the equivalent words. Ours: added to the original word set examples, then training a classifier on the union. Mihalcea's: training a classifier on the equivalent examples alone.
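A rough sketch of our local-context feature set follows; the exact templates of the MEANING feature extractor are not reproduced here, and the function name and padding symbols are our own:

```python
def extract_features(tokens, tags, i, window=2):
    """Local-context features for the word occurrence at position i:
    words and POS tags in a +/-2 window, plus a bag-of-words feature
    over the whole sentence (as in the MEANING WSD task)."""
    pad_w = ['<s>'] * window + tokens + ['</s>'] * window
    pad_t = ['<s>'] * window + tags + ['</s>'] * window
    j = i + window
    feats = {}
    for k in range(1, window + 1):
        feats['w-%d' % k] = pad_w[j - k]   # k-th word to the left
        feats['w+%d' % k] = pad_w[j + k]   # k-th word to the right
        feats['p-%d' % k] = pad_t[j - k]   # k-th POS to the left
        feats['p+%d' % k] = pad_t[j + k]   # k-th POS to the right
    for w in tokens:                       # bag of words over the sentence
        feats['bow=' + w.lower()] = 1
    return feats

tokens = ['Tom', 'plays', 'tennis']
tags   = ['NNP', 'VBZ', 'NN']
print(extract_features(tokens, tags, 1))
```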

9 Technical details SVM_light (Joachims):
- Each word is a binary classification problem which has to decide between two possible labels (senses). Positive examples of one sense are negative for the other.
- 10-fold cross validation: testing on a random fold of the original examples; training on the rest of the originals plus the equivalents.
- C parameter tuned by margin (5 pieces and 2 rounds) for each classification problem.
- Linear kernel.
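This protocol can be approximated with scikit-learn in place of SVM_light. The key asymmetry is that test folds are drawn from the original examples only, while the equivalents are always added to training. A plain grid search stands in for the paper's margin-based C tuning (5 pieces, 2 rounds); all names here are ours:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import LinearSVC

def evaluate(X_orig, y_orig, X_extra, y_extra, Cs=(0.01, 0.1, 1, 10)):
    """10-fold CV: test on a fold of the originals only; train on the
    remaining originals plus all equivalent-word examples."""
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = []
    for train_idx, test_idx in skf.split(X_orig, y_orig):
        X_tr = np.vstack([X_orig[train_idx], X_extra])
        y_tr = np.concatenate([y_orig[train_idx], y_extra])
        # Simplified C selection on the training part (an inner split
        # would be cleaner; the paper tunes C by margin instead).
        best = max((LinearSVC(C=C).fit(X_tr, y_tr) for C in Cs),
                   key=lambda clf: clf.score(X_tr, y_tr))
        scores.append(best.score(X_orig[test_idx], y_orig[test_idx]))
    return float(np.mean(scores))
```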

10 Experiments
Baselines:
- Examples from the original word set codified with all features.
- Examples from the original word set codified only with the BOW feature.
Experiments over each baseline:
- Examples from equivalents codified with all features.
- Examples from equivalents codified only with the BOW feature.
- Examples from equivalents added in equal proportions.
Relations explored in each experiment:
- Hyponymy, levels 1, 2 and both
- Hyperonymy, levels 1, 2 and both
- Synonymy
- Mixes: SHypo1, SHypo2, SHypo12, SHype1, SHype2, SHype12
Total: 8 words * 2 baselines * 3 experiments * 13 relations = 624 results (enumerated in the sketch below).
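The full grid of runs is small enough to enumerate directly. Relation labels follow the slides' abbreviations (Sinon = synonymy, Hype/Hypo = hyperonymy/hyponymy, S-prefix = mixed with synonymy); the experiment names are our own shorthand:

```python
from itertools import product

words = ['child', 'day', 'find', 'keep', 'live', 'material', 'play', 'serve']
baselines = ['all_features', 'bow_only']
experiments = ['equiv_all_features', 'equiv_bow_only', 'equiv_proportional']
relations = ['Hypo1', 'Hypo2', 'Hypo12', 'Hype1', 'Hype2', 'Hype12', 'Sinon',
             'SHypo1', 'SHypo2', 'SHypo12', 'SHype1', 'SHype2', 'SHype12']

runs = list(product(words, baselines, experiments, relations))
assert len(runs) == 624  # 8 * 2 * 3 * 13
```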

11 Results – Originals BOW, Added BOW (I)
Baseline 1 (originals, only BOW feature): MFS 72.74, accuracy 77.78.

In detail | #origs | MFS   | Baseline | #added | Best accuracy & relation
Child     | 206    | 70.86 | 74.27    | 62     | 76.21 BagSHype1
Day       | 195    | 83.61 | 85.64    | 6      | 86.15 BagSinon
Find      | 27     | 59.67 | 70.37    | 8      | 74.07 BagSHype2
Keep      | 24     | 56.67 | 75.00    | 611    | 66.67 BagSHype1
Live      | 88     | 64.64 | 75.00    | 4      | 76.14 BagHypo2
Material  | 107    | 68.41 | 72.90    | 196    | 78.50 BagHype12
Play      | 54     | 75.05 | 79.63    | 4      | 81.48 BagSinon
Serve     | 46     | 74.00 | 80.43    | 34     | 91.30 BagHypo1

12 Results – Originals BOW, Added BOW (II)
Baseline 1 (originals, only BOW feature): MFS 72.74, accuracy 77.78.

Best global results | #original exs | #exs added | Accuracy
BagSHype1           | 747           | 1545       | 77.64
BagSinon            | 747           | 119        | 77.38
BagSHypo1           | 747           | 608        | 77.14
BagHypo1            | 747           | 489        | 77.05

13 Results – Originals All Feats, Added Both (I)
Baseline 2 (originals, all features): MFS 72.74, accuracy 78.71.

In detail | #origs | MFS   | Baseline | #added | Best accuracy & relation
Child     | 206    | 70.86 | 75.73    | 79     | 79.61 BagSHypo1
Day       | 195    | 83.61 | 85.64    | 145    | 88.21 Hypo1
Find      | 27     | 59.67 | 74.07    | 2      | 77.78 Hypo1
Keep      | 24     | 56.67 | 75.00    | 611    | 79.17 BagSHype1
Live      | 88     | 64.64 | 76.14    | 476    | 78.41 BagHype1
Material  | 107    | 68.41 | 72.90    | 201    | 79.44 SHype12
Play      | 54     | 75.05 | 77.78    | 12     | 83.33 SHypo1
Serve     | 46     | 74.00 | 86.96    | 34     | 91.30 Hypo1

14 Results – Originals All Feats, Added Both (II)
Baseline 2 (originals, all features): MFS 72.74, accuracy 78.71.

Best global results | #original exs | #exs added | Accuracy
BagSHype1           | 747           | 1545       | 82.40
BagSHype12          | 747           | 8730       | 82.22
BagSinon            | 747           | 119        | 82.21
SHype1              | 747           | 1545       | 81.68

15 Results – O-all, A-both, keeping proportions (I)
Baseline 2 (originals, all features): MFS 72.74, accuracy 78.71.

In detail | #origs | MFS   | Baseline | #added | Best accuracy & relation
Child     | 206    | 70.86 | 75.73    | 51     | 80.10 BagSHypo1
Day       | 195    | 83.61 | 85.64    | 4      | 86.67 Sinon
Find      | 27     | 59.67 | 74.07    | 6      | 70.37 Hype1
Keep      | 24     | 56.67 | 75.00    | 44     | 87.50 SHype1
Live      | 88     | 64.64 | 76.14    | 0      | 76.14 (none)
Material  | 107    | 68.41 | 72.90    | 54     | 75.70 BagHype2
Play      | 54     | 75.05 | 77.78    | 6      | 81.48 BagSHypo1
Serve     | 46     | 74.00 | 86.96    | 14     | 91.30 Hypo1

16 Results – O-all, A-both, keeping proportions (II)
Baseline 2 (originals, all features): MFS 72.74, accuracy 78.71.

Best global results | #original exs | #exs added | Accuracy
BagSHypo1           | 747           | 226        | 79.84
SHype12             | 747           | 179        | 79.65
SHype1              | 747           | 112        | 79.38
Hype2               | 747           | 121        | 79.38

Note: examples were not randomly added.

17 Results Other results:
- Accuracy improves slightly when manually choosing which equivalents to take into account.
- Experiments with a set of 41 words (nouns and verbs) with all senses per word (varying from 2 to 20) gave worse results: accuracies for all mixes of relations and features fall below the baseline.

18 Conclusions The work presented by R. Mihalcea leaves some points unclear: the criteria used to select the words involved in the experiment, the criteria used to select word senses, the restriction on the number of senses per word, the semantic relations used to obtain the monosemous equivalents, and the features used for learning are not satisfactorily described. Despite this, Mihalcea's results showed that equivalents carry useful information for WSD (better than MFS: 76.60% against 70.61%). But is this information useful to improve the state of the art in WSD? The experiments carried out here showed that adding examples coming from the equivalents seems to improve the results in a restricted framework, that is, a small word set, only two senses per word, and clearly differentiated senses. When we moved to an open framework (a bigger word set of 41 words, no special selection of words, no special selection of senses, and no restriction on the number of senses per word, varying from 2 to 20), results proved to be worse.

         | MFS   | Classifier trained on the automatically generated corpora | Classifier trained on the manually generated corpora
Accuracy | 70.61 | 76.60                                                      | 83.35

19 Future Work
- Did the differences between the feature set used by Mihalcea and the one we used critically affect the results of the 41-word experiment?
- Exploit the feature extractor over SemCor to enrich the feature set used. Ideas are welcome.
- Study the correlation between the number of examples added and the accuracy obtained.
- Restrict the addition of examples coming from the equivalent words (second-class examples).
- Randomly select which examples to add when keeping proportions or restricting the addition (a sketch follows below).
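A minimal sketch of the proposed random, proportion-preserving selection; the sense keys, example pools and function name are hypothetical:

```python
import random

def sample_keeping_proportions(extra_by_sense, orig_counts, n_add, seed=0):
    """Randomly draw n_add equivalent-word examples, distributed across
    senses in the same proportion as the original training examples."""
    rng = random.Random(seed)
    total = sum(orig_counts.values())
    chosen = []
    for sense, pool in extra_by_sense.items():
        quota = round(n_add * orig_counts[sense] / total)
        chosen.extend(rng.sample(pool, min(quota, len(pool))))
    return chosen

# hypothetical pools of harvested examples per sense of "play"
extra = {'play%00727813': ['ex1', 'ex2', 'ex3', 'ex4'],
         'play%01177316': ['ex5', 'ex6']}
orig = {'play%00727813': 30, 'play%01177316': 24}
print(sample_keeping_proportions(extra, orig, n_add=4))
```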

