1
Improving Morphosyntactic Tagging of Slovene by Tagger Combination Jan Rupnik Miha Grčar Tomaž Erjavec Jožef Stefan Institute
2
Outline Introduction Motivation Tagger combination Experiments
3
POS tagging
Part-of-speech (POS) tagging: assigning morphosyntactic categories to words.
Example:
Veža je smrdela po kuhanem zelju in starih, cunjastih predpražnikih.
N    V  V       S  A       N     C  A      A         N
("The hallway smelled of boiled cabbage and old, ragged doormats.")
4
Slovene POS tags
The multilingual MULTEXT-East specification defines almost 2,000 tags (morphosyntactic descriptions, MSDs) for Slovene.
Tags are positionally coded attributes. Example: MSD Agufpa
– Category = Adjective
– Type = general
– Degree = undefined
– Gender = feminine
– Number = plural
– Case = accusative
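The positional coding above can be illustrated with a small decoder. This is a sketch, not the official MULTEXT-East tables: only the adjective attribute order and the handful of value codes from the Agufpa example on the slide are included, and all names are our own.

```python
# Sketch: decoding a positionally coded MSD into named attributes.
# Attribute order and value codes cover only the Agufpa example;
# the real MULTEXT-East tables are category-dependent and far richer.

ADJECTIVE_POSITIONS = ["Type", "Degree", "Gender", "Number", "Case"]

VALUE_NAMES = {
    "g": "general", "u": "undefined", "f": "feminine",
    "p": "plural", "a": "accusative", "n": "nominative",
}

CATEGORY_NAMES = {"A": "Adjective", "N": "Noun", "V": "Verb"}

def decode_msd(msd):
    """Expand a positionally coded MSD such as 'Agufpa' into a dict."""
    attrs = {"Category": CATEGORY_NAMES.get(msd[0], msd[0])}
    if msd[0] == "A":  # only adjectives are handled in this sketch
        for name, code in zip(ADJECTIVE_POSITIONS, msd[1:]):
            attrs[name] = VALUE_NAMES.get(code, code)
    return attrs

print(decode_msd("Agufpa"))
```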
5
State of the art: two taggers
Amebis d.o.o. proprietary tagger
– Based on handcrafted rules
TnT tagger
– Based on statistical modelling of sentences and their POS tags
– Hidden Markov Model trigram tagger
– Trained on a large corpus of annotated sentences
6
Statistics: motivation
Different tagging outcomes of the two taggers on the JOS corpus of 100k words:
– Green: proportion of words where both taggers were correct
– Yellow: both predicted the same, incorrect tag
– Blue: both predicted incorrect but different tags
– Cyan: Amebis correct, TnT incorrect
– Purple: TnT correct, Amebis incorrect
7
Example
Word        True       TnT        Amebis
Preiskave   Ncfpn      Ncfsg
med         Si         Ncmsan     Si
sodnim      Agumsi     Agumpd     Agumsi
postopkom   Ncmsi      Ncmpd      Ncmsi
so          Va-r3p-n
pokazale    Vmep-pf
8
Combining the taggers
[Diagram] The input text (e.g. "Veža je smrdela po kuhanem zelju in starih, cunjastih predpražnikih.") flows through both TnT and the Amebis tagger in parallel; each proposes a tag per word (Tag TnT, Tag Amb), and the meta-tagger chooses between the two to produce the final tag.
9
Combining the taggers
For "… prepričati italijanske pravosodne oblasti …", TnT and the Amebis tagger each propose an MSD for "oblasti" (Agufpn vs. Agufpa). A feature vector built from the two predictions is passed to the meta-tagger: a binary classifier whose two classes are TnT and Amebis, deciding which tagger's MSD to keep.
10
Feature vector construction
For "… prepričati italijanske pravosodne oblasti …", TnT predicts Agufpn and Amebis predicts Agufpa for "oblasti".
TnT features: POS_T=Adjective, Type_T=general, Gender_T=feminine, Number_T=plural, Case_T=nominative, Animacy_T=n/a, Aspect_T=n/a, Form_T=n/a, Person_T=n/a, Negative_T=n/a, Degree_T=undefined, Definiteness_T=n/a, Participle_T=n/a, Owner_Number_T=n/a, Owner_Gender_T=n/a
Amebis features: POS_A=Adjective, Type_A=general, Gender_A=feminine, Number_A=plural, Case_A=accusative, Animacy_A=n/a, Aspect_A=n/a, Form_A=n/a, Person_A=n/a, Negative_A=n/a, Degree_A=undefined, Definiteness_A=n/a, Participle_A=n/a, Owner_Number_A=n/a, Owner_Gender_A=n/a
Agreement features: POS_A=T=yes, Type_A=T=yes, …, Number_A=T=yes, Case_A=T=no, Animacy_A=T=yes, …, Owner_Gender_A=T=yes
Here Amebis's prediction (Agufpa) is the correct tag, so the class label for this training example is Amebis.
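The construction above can be sketched as follows. The decomposition function is a toy assumption (one attribute per MSD position, in the adjective order of the Agufpa example); the real MULTEXT-East decomposition is category-dependent and covers many more attributes.

```python
# Sketch of the feature-vector construction: decompose each tagger's
# MSD into per-attribute features, then add agreement features.
# The positional decomposition below is a simplifying assumption.

ATTRS = ["POS", "Type", "Degree", "Gender", "Number", "Case"]

def decompose(msd):
    """Toy decomposition: position i of the MSD maps to ATTRS[i]."""
    return {a: (msd[i] if i < len(msd) else "n/a")
            for i, a in enumerate(ATTRS)}

def feature_vector(msd_tnt, msd_amebis):
    """Build TnT, Amebis, and agreement features from the two MSDs."""
    tnt, amb = decompose(msd_tnt), decompose(msd_amebis)
    feats = {}
    for a in ATTRS:
        feats[a + "_T"] = tnt[a]                              # TnT features
        feats[a + "_A"] = amb[a]                              # Amebis features
        feats[a + "_A=T"] = "yes" if tnt[a] == amb[a] else "no"  # agreement
    return feats

fv = feature_vector("Agufpn", "Agufpa")  # nominative vs. accusative
print(fv["Case_A=T"])  # the only attribute the two taggers disagree on
```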
11
Context
(a) No context: the feature vector for the focus word "pravosodne" (in "… italijanske pravosodne oblasti …") contains only that word's TnT, Amebis, and agreement features.
(b) Context: the vector additionally includes the TnT and Amebis features of the neighbouring words "italijanske" and "oblasti".
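A minimal sketch of extending a focus word's features with those of its neighbours; the dict-of-features representation and the prev_/next_ prefixes are our assumptions for illustration.

```python
# Sketch: merge the focus word's feature dict with prefixed copies of
# the neighbours' features, as in scenario (b) above. Missing
# neighbours (sentence boundaries) are simply skipped.

def with_context(focus_feats, prev_feats=None, next_feats=None):
    """Return the focus features extended with neighbour features."""
    out = dict(focus_feats)
    for prefix, neigh in (("prev_", prev_feats), ("next_", next_feats)):
        if neigh:
            out.update({prefix + k: v for k, v in neigh.items()})
    return out
```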
12
Classifiers
Naive Bayes
– Probabilistic classifier
– Assumes strong independence of features
– Black-box classifier
CN2 rules
– If-then rule induction
– Covering algorithm
– Interpretable model as well as its decisions
C4.5 decision tree
– Based on information entropy
– Splitting algorithm
– Interpretable model as well as its decisions
13
Experiments: dataset
JOS corpus: approximately 250 texts (100k words; 120k tokens if we include punctuation), sampled from the larger FidaPLUS corpus.
For the meta-tagger experiments, TnT was trained with 10-fold cross-validation: each time training on 9 folds and tagging the remaining fold.
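The cross-validation setup can be sketched as below, so that every sentence is tagged by a TnT model that never saw it in training. The `train_and_tag` callable is a hypothetical stand-in for training and running the real TnT tagger.

```python
# Sketch: 10-fold jackknifing. Split the corpus into k folds; for each
# fold, train on the other k-1 and tag the held-out fold, so each
# sentence receives exactly one prediction from an unseen model.

def k_fold_predictions(sentences, train_and_tag, k=10):
    """train_and_tag(train, held_out) -> list of predictions, one per
    held-out sentence. Returns all predictions (fold order, not
    original corpus order)."""
    folds = [sentences[i::k] for i in range(k)]
    predictions = []
    for i, held_out in enumerate(folds):
        train = [s for j, fold in enumerate(folds) if j != i
                 for s in fold]
        predictions.extend(train_and_tag(train, held_out))
    return predictions
```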
14
Experiments: baselines
Baseline 1:
– Majority classifier (always predict what TnT predicts)
– Accuracy: 53%
Baseline 2:
– Naive Bayes
– One feature only: the full Amebis MSD
– Accuracy: 71%
15
Baseline 2
With its single feature (the full Amebis MSD), the Naive Bayes classifier simplifies to counting the occurrences of two events for every MSD f:
– n_c(f): the number of cases where Amebis predicted tag f and was correct
– n_w(f): the number of cases where Amebis predicted tag f and was incorrect
Given a pair of predictions MSD_A (Amebis) and MSD_T (TnT), NB yields the rule: if n_c(MSD_A) < n_w(MSD_A), predict MSD_T; otherwise predict MSD_A.
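Under those definitions the baseline reduces to a few lines of counting; variable and function names here are ours, not the paper's.

```python
# Sketch of the Baseline 2 decision rule: count, per Amebis tag, how
# often it was right vs. wrong on training data; at prediction time
# keep the Amebis tag unless it was wrong more often than right.

from collections import Counter

def train_counts(examples):
    """examples: iterable of (amebis_tag, true_tag) pairs."""
    n_correct, n_wrong = Counter(), Counter()
    for amebis, true in examples:
        if amebis == true:
            n_correct[amebis] += 1
        else:
            n_wrong[amebis] += 1
    return n_correct, n_wrong

def predict(msd_tnt, msd_amebis, n_correct, n_wrong):
    """If n_c(MSD_A) < n_w(MSD_A), predict TnT's tag, else Amebis's."""
    if n_correct[msd_amebis] < n_wrong[msd_amebis]:
        return msd_tnt
    return msd_amebis
```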
16
Experiments: different classifiers and feature sets
Classifiers: NB, CN2, C4.5
Feature sets:
– Full MSD
– Decomposed MSD plus agreement features
– Basic features: a subset of the decomposed-MSD feature set (Category, Type, Number, Gender, Case)
– Union of all features considered (full + decompositions)
Scenarios:
– No context
– Context, ignoring punctuation
– Context, including punctuation
17
Results
– Context helps
– Punctuation slightly improves classification
– C4.5 with basic features works best

No context
Classifier   FULL TAG   DEC     BASIC   FULL+DEC
NB           73.90      67.55   67.50   69.65
C4.5         73.51      74.70   74.23   73.59
CN2          60.61      72.57   71.68   70.90

Context without punctuation
Classifier   FULL TAG   DEC     BASIC   FULL+DEC
NB           73.10      68.29   67.96   70.55
C4.5         73.10      78.51   79.23   76.72
CN2          62.16      73.26   72.75   72.29

Context with punctuation
Classifier   FULL TAG   DEC     BASIC   FULL+DEC
NB           73.44      68.32   68.14   70.53
C4.5         74.18      78.91   79.73   77.68
CN2          62.23      74.27   72.82   73.01
18
Overall error rate
19
Thank you!