
1 WP 2: Semi-automatic metadata generation driven by Language Technology Resources Lothar Lemnitzer Project review, Utrecht, 1 Feb 2007

2 Our Background Experience in corpus annotation and information extraction from texts Experience in grammar development Experience in statistical modelling Experience in eLearning

3 WP2 Dependencies WP 1: collection and preparation of LOs WP 3: WP 2 results are input to this WP WP 4: Integration of tools WP 5: Evaluation and validation

4 M12 Deliverables Language Resources: > 200 000 running words per language With structural and linguistic annotation > 1000 manually selected keywords > 300 manually selected definitions Local grammars for definitory contexts

5 M12 Deliverables Documentation: Guidelines for linguistic annotation Guidelines for keyword annotation Guidelines for the annotation of definitions (Guidelines for evaluation)

6 M12 Deliverables Tools: Prototype Keyword Extractor Prototype Glossary Candidate Detector

7 (Architecture diagram) Components: user documents (PDF, DOC, HTML, SCORM, XML); Convertor 1 (SCORM / pseudo-structured documents to basic XML); linguistic processor (lemmatizer, POS tagger, partial parser) with language-specific lexicons (BG, CZ, DT, EN, GE, MT, PL, PT, RO); linguistically annotated XML with metadata (keywords); repository; glossary; ontology; cross-lingual retrieval; LMS with user profiles; Convertor 2 (output as HTML documents to the user)

8 Linguistically annotated learning objects Structural annotation: par, s, chunk, tok Linguistic annotation: base, ctag, msd attributes (→ Example 1) Specific annotation: marked term, defining text

9 Part of the DTD
<!ATTLIST markedTerm
    %a.ana;
    kw      (y|n)  "n"
    dt      (y|n)  "n"
    status  CDATA  #IMPLIED
    comment CDATA  #IMPLIED >
<!ATTLIST definingText
    id        ID     #IMPLIED
    xml:lang  CDATA  #IMPLIED
    lang      CDATA  #IMPLIED
    rend      CDATA  #IMPLIED
    type      CDATA  #IMPLIED
    wsd       CDATA  #IMPLIED
    def       IDREF  #IMPLIED
    continue  CDATA  #IMPLIED
    part      CDATA  #IMPLIED
    status    CDATA  #IMPLIED
    comment   CDATA  #IMPLIED >

10 Linguistically annotated learning objects Use: Linguistically annotated texts are input to the extraction tools Marked terms and defining texts are used as training material and / or as gold standard for the evaluation
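
As an illustration of how the manually marked terms can serve as a gold standard, the following minimal Python sketch reads the keywords (markedTerm elements with kw="y", as in the DTD above) from an annotated learning object; the file name, the assumption that the element's text is the keyword's surface form, and the absence of XML namespaces are our assumptions, not the project's format.

import xml.etree.ElementTree as ET

def gold_keywords(path):
    # Collect the text of every markedTerm element whose kw attribute
    # is "y", i.e. a manually selected keyword (see the DTD above).
    tree = ET.parse(path)
    keywords = set()
    for term in tree.iter("markedTerm"):
        if term.get("kw") == "y":
            # Join the token text inside the marked term into one string.
            keywords.add(" ".join("".join(term.itertext()).split()))
    return keywords

# Hypothetical usage: gold = gold_keywords("annotated_lo.xml")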

11 Characteristics of keywords 1. Good keywords have a typical, non-random distribution within and across LOs 2. Keywords tend to appear more often at certain places in texts (headings etc.) 3. Keywords are often highlighted / emphasised by authors

12 Distributional features of keywords We use the following metrics to measure keywordiness by distribution: term frequency / inverse document frequency (tf*idf); residual inverse document frequency (RIDF); an adjusted version of RIDF (adjusted by term frequency) to model the inter-text distribution of keywords; and term burstiness to model the intra-text distribution of keywords
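
For the first two metrics, a minimal Python sketch (not the project code), computed from a term's document frequency df, total collection frequency cf and the number of documents n_docs:

import math

def tf_idf(tf, df, n_docs):
    # Term frequency weighted by inverse document frequency.
    return tf * math.log(n_docs / df)

def ridf(cf, df, n_docs):
    # Residual IDF: observed IDF minus the IDF expected under a Poisson
    # model that spreads the term's cf occurrences randomly over the
    # n_docs documents (rate = cf / n_docs); high values indicate a
    # keyword-like, non-random distribution.
    idf = -math.log2(df / n_docs)
    expected_idf = -math.log2(1.0 - math.exp(-cf / n_docs))
    return idf - expected_idf

The tf-adjusted RIDF variant and the burstiness measure are omitted from this sketch.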

13 Structural and layout features of keywords We will use knowledge of text structure to identify salient regions (e.g. headings) and layout features of texts to identify emphasised words (→ Example 2); words with such features are weighted higher
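
A sketch of how such cues could be combined with a distributional score; the boost factors below are invented for illustration only:

def weighted_score(base_score, in_heading=False, emphasised=False):
    # Boost the distributional score for words occurring in salient
    # regions (e.g. headings) or emphasised by the author.
    score = base_score
    if in_heading:
        score *= 2.0   # assumed boost factor
    if emphasised:
        score *= 1.5   # assumed boost factor
    return score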

14 Complex keywords Complex, multi-word keywords are relevant, with marked differences between languages The keyword extractor allows the extraction of n-grams of any length Evaluation showed that including bigrams and even trigrams improves the results, while with larger n-grams the performance begins to drop Maximum keyword length can be specified as a parameter
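
A minimal sketch of n-gram candidate generation with a configurable maximum length (the parameter name max_len is an assumption, not the extractor's actual interface):

def ngram_candidates(tokens, max_len=3):
    # Generate all contiguous n-grams of 1 to max_len tokens.
    candidates = []
    for n in range(1, max_len + 1):
        for i in range(len(tokens) - n + 1):
            candidates.append(tuple(tokens[i:i + n]))
    return candidates

# ngram_candidates(["style", "of", "learning"], max_len=2)
# -> [('style',), ('of',), ('learning',), ('style', 'of'), ('of', 'learning')]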

15 Complex keywords
Language | Single-word keywords | Multi-word keywords
German   | 91 %                 | 9 %
Polish   | 35 %                 | 65 %

16 Language settings for the keyword extractor Selection of single keywords is restricted to a few ctag categories and / or msd values (nouns, proper nouns, unknown words and some verbs for most languages) Multi-word patterns are restricted with respect to the position of function words ("style of learning" is acceptable; "of learning behaviours" is not)
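
As an illustration of these restrictions (the tag set and stop-word list below are placeholders; the real settings are per-language ctag / msd values):

CONTENT_TAGS = {"NN", "NP", "UNK", "VB"}     # assumed stand-ins for ctag values
FUNCTION_WORDS = {"of", "the", "a", "in"}    # assumed; language-specific in practice

def acceptable(candidate):
    # candidate: list of (word, ctag) pairs.
    words = [w.lower() for w, _ in candidate]
    tags = [t for _, t in candidate]
    if len(candidate) == 1:
        # Single-word keywords: only a few ctag categories are allowed.
        return tags[0] in CONTENT_TAGS
    # Multi-word keywords: function words may not start or end the pattern.
    return words[0] not in FUNCTION_WORDS and words[-1] not in FUNCTION_WORDS

# acceptable([("style", "NN"), ("of", "PREP"), ("learning", "NN")])       -> True
# acceptable([("of", "PREP"), ("learning", "NN"), ("behaviours", "NN")])  -> False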

17 Output of the Keyword Extractor A list, ordered by “keywordiness” value, whose elements contain the normalized form of the keyword, (statistical figures,) and the list of attested forms of the keyword (→ Example 3)
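
A minimal sketch of such an output structure (the field names are assumptions, not the tool's actual format):

def ranked_output(scores, attested_forms):
    # scores: {normalized keyword: keywordiness value}
    # attested_forms: {normalized keyword: [surface forms found in the text]}
    entries = [
        {"keyword": kw, "score": score, "forms": attested_forms.get(kw, [])}
        for kw, score in scores.items()
    ]
    # Order by descending keywordiness value.
    return sorted(entries, key=lambda e: e["score"], reverse=True)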

18 Evaluation strategy We will proceed in three steps: 1. Manually assigned keywords will be used to measure precision and recall of the keyword extractor 2. Human annotators will judge and rate the results from the extractor 3. The same document(s) will be annotated by several test persons in order to estimate inter-annotator agreement on this task
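
For step 1, precision and recall reduce to set comparisons between extracted and manually assigned keywords; a minimal sketch:

def precision_recall(extracted, gold):
    # extracted, gold: sets of normalized keywords.
    extracted, gold = set(extracted), set(gold)
    hits = len(extracted & gold)
    precision = hits / len(extracted) if extracted else 0.0
    recall = hits / len(gold) if gold else 0.0
    return precision, recall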

19 Summary With the keyword extractor, 1. We use several known statistical metrics in combination with qualitative and linguistic features 2. We place special emphasis on multi-word keywords 3. We evaluate the impact of these features on the performance of these tools for eight languages 4. We integrate this tool into an eLearning application 5. We have a prototype user interface to this tool

20 Identification of definitory contexts Empirical approach based on the linguistic annotation of LOs Workflow – Definitory contexts are searched for and marked in LOs – Recurrent patterns are characterized quantitatively and qualitatively (→ Example 4) – Local grammars are drafted on the basis of these recurrent patterns – Extraction of definitory contexts is performed by lxtransduce (University of Edinburgh - LTG)

21 Characteristics of local grammars Grammar rules match and wrap subtrees of the XML document tree One grammar rule refers to subrules which match substructures Rules can refer to lexical lists to constrain categories further The defined term should be identified and marked
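
The actual rules are written in lxtransduce's XML grammar format and operate on the annotated XML; purely to illustrate the idea (not the project's grammars), a copula pattern such as the Dutch "X is een Y ..." could be approximated over (lemma, tag) pairs as below, with assumed tag names:

def match_copula_definition(sent):
    # sent: list of (lemma, ctag) pairs for one sentence.
    # Pattern: noun ... copula ("zijn" / "be") + indefinite article + noun.
    for i, (lemma, tag) in enumerate(sent[:-2]):
        if lemma in ("zijn", "be") and tag.startswith("V"):
            nouns_before = [w for w, t in sent[:i] if t.startswith("N")]
            article, _ = sent[i + 1]
            _, right_tag = sent[i + 2]
            if nouns_before and article in ("een", "a", "an") and right_tag.startswith("N"):
                # The last noun before the copula is the defined-term candidate.
                return {"term": nouns_before[-1],
                        "context": " ".join(w for w, _ in sent)}
    return None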

22 Example Een vette letter is een letter die zwarter wordt afgedrukt dan de andere letters. ('A bold letter is a letter that is printed blacker than the other letters.')

23 – 26 Een vette letter is een letter die zwarter wordt afgedrukt dan de andere letters.

27 Output of the Glossary Candidate Detector An ordered list of words, with the defined term marked, (larger context: one preceding and one following sentence) (→ Example 10)

28 Evaluation Strategy We will proceed in two steps: 1. Manually marked definitory contexts will be used to measure precision and recall of the glossary candidate detector 2. Human annotators will judge the results from the glossary candidate detector and rate their quality / completeness

29 Results
           | Precision | Recall
Own LOs    | 21.5 %    | 34.9 %
Verbs only | 34.1 %    | 29.0 %

30 Evaluation Questions to be answered by a user-centered evaluation: Is there a preference for higher recall or for higher precision? Do users benefit from seeing a larger context?

31 Integration of functionalities (architecture diagram): an ILIAS server (content portal, LMS, LOs) and a Java web server (Tomcat) hosting the application logic, the user interface and the keyword / definitory-context / ontology Java classes and data; the functionalities are accessed directly via servlets / JSP, evaluated inside ILIAS, or used through SOAP web services (Axis, nuSOAP) by a migration tool and third-party tools; code and data are pulled from a development server (CVS) via nightly updates
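
As an illustration of the "use functionalities through SOAP" path, a client call might look like the Python sketch below using the zeep SOAP library; the WSDL URL, operation name and parameters are hypothetical, not the project's actual service interface.

from zeep import Client

# Hypothetical WSDL location and operation name; the real endpoint and
# method signature are defined by the project's web services (Axis / nuSOAP).
client = Client("http://localhost:8080/lt4el/KeywordExtractor?wsdl")
keywords = client.service.extractKeywords(documentId="LO-123", maxKeywords=10)
print(keywords)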

32 User Interface Prototypes


