1 1 Computational Tools for Linguists Inderjeet Mani Georgetown University

2 2 Topics Computational tools for -manual and automatic annotation of linguistic data -exploration of linguistic hypotheses Case studies Demonstrations and training Inter-annotator reliability Effectiveness of annotation scheme Costs and tradeoffs in corpus preparation

3 3 Outline Topics -Concordances -Data sparseness -Chomsky's Critique -Ngrams -Mutual Information -Part-of-speech tagging -Annotation Issues -Inter-Annotator Reliability -Named Entity Tagging -Relationship Tagging Case Studies -metonymy -adjective ordering -Discourse markers: then -TimeML

4 4 Corpus Linguistics Use of linguistic data from corpora to test linguistic hypotheses => emphasizes language use Uses computers to do the searching and counting from on-line material -Faster than doing it by hand! Check? Most typical tool is a concordancer, but there are many others! Tools can analyze a certain amount, rest is left to human! Corpus Linguistics is also a particular approach to linguistics, namely an empiricist approach -Sometimes (extreme view) opposed to the rationalist approach, at other times (more moderate view) viewed as complementary to it -Cf. Theoretical vs. Applied Linguistics

5 5 Empirical Approaches in Computational Linguistics Empiricism – the doctrine that knowledge is derived from experience Rationalism: the doctrine that knowledge is derived from reason Computational Linguistics is, by necessity, focused on performance, in that naturally occurring linguistic data has to be processed -Naturally occurring data is messy! This means we have to process data characterized by false starts, hesitations, elliptical sentences, long and complex sentences, input that is in a complex format, etc. The methodology used is corpus-based -linguistic analysis (phonological, morphological, syntactic, semantic, etc.) carried out on a fairly large scale -rules are derived by humans or machines from looking at phenomena in situ (with statistics playing an important role)

6 6 Example: metonymy Metonymy: substituting the name of one referent for another -George W. Bush invaded Iraq -A Mercedes rear-ended me Is metonymy involving institutions as agents more common in print news than in fiction? -The X V reporting Let's start with: The X said - This pattern will provide a handle to identify the data

7 Page 7 Exploring Corpora Datasets Metonymy Test using Corpora

8 8 The X said from Concordance data [Table: frequency and frequency per million words of the pattern "The X said" in the Fiction corpora vs. a 1.9M-word Print News corpus; the pattern is markedly more frequent in print news.] The preference for metonymy in print news arises because of the need to communicate information from companies and governments.

9 9 Chomsky's Critique of Corpus-Based Methods 1. Corpora model performance, while linguistics is aimed at the explanation of competence If you define linguistics that way, linguistic theories will never be able to deal with actual, messy data Many linguists don't find the competence-performance distinction to be clear-cut. Sociolinguists have argued that the variability of linguistic performance is systematic, predictable, and meaningful to speakers of a language. Grammatical theories vary in where they draw the line between competence and performance, with some grammars (such as Halliday's Systemic Grammar) organized as systems of functionally-oriented choices.

10 10 Chomsky's Critique (concluded) 2. Natural language is in principle infinite, whereas corpora are finite, so many examples will be missed Excellent point, which needs to be understood by anyone working with a corpus. But does that mean corpora are useless? Introspection is unreliable (prone to performance factors, cf. only short sentences), and pretty useless with child data. Also, insights from a corpus might lead to generalization/induction beyond the corpus – if the corpus is a good sample of the text population 3. Ungrammatical examples won't be available in a corpus Depends on the corpus, e.g., spontaneous speech, language learners, etc. The notion of grammaticality is not that clear -Who did you see [pictures/?a picture/??his picture/*John's picture] of? -ARG/ADJUNCT example

11 11 Which Words are the Most Frequent? Common Words in Tom Sawyer (71,370 words), from Manning & Schütze p. 21 Will these counts hold in a different corpus (and genre, cf. Tom)? What happens if you have 8-9M words? (check usage demo!)

12 12 Data Sparseness Many low-frequency words, fewer high-frequency words. Only a few words will have lots of examples. About 50% of word types occur only once; over 90% occur 10 times or less. So, there is merit to Chomsky's 2nd objection. [Table: word frequency vs. number of word types with that frequency in Tom Sawyer, from M&S p. 22.]

13 13 Zipf's Law: Frequency is inversely proportional to rank. [Table: empirical evaluation of Zipf's Law on Tom Sawyer, from M&S p. 23 — sample words from rank ~200 (turned) down to rank ~8000 (applausive, frequency 1), e.g., turned, you'll, name, comes, group, lead, friends, begin, family, brushed, sins, could, with rank × frequency staying roughly constant.]

14 14 Illustration of Zipf's Law (Brown Corpus, from M&S p. 30): frequency vs. rank plotted on a logarithmic scale
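A minimal sketch of how one might check Zipf's law on a corpus: count word frequencies and inspect rank × frequency at a few sample ranks. The filename tom_sawyer.txt is only a placeholder; any large plain-text file will do.

```python
# Counting word frequencies and inspecting rank * frequency (Zipf's law).
# "tom_sawyer.txt" is a placeholder filename; any large plain-text file will do.
import re
from collections import Counter

with open("tom_sawyer.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z']+", f.read().lower())

ranked = Counter(words).most_common()        # [(word, freq), ...] by decreasing frequency
for rank, (word, freq) in enumerate(ranked, start=1):
    if rank in (1, 10, 100, 1000):
        print(f"rank {rank:5d}  {word:12s}  freq {freq:6d}  rank*freq {rank * freq}")
```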

15 15 Tokenizing words for corpus analysis 1. Break on -Spaces? inuo butta otokonokowa otooto da -Periods? (U.K. Products) -Hyphens? data-base = database = data base -Apostrophes? won't, couldn't, O'Riley, car's 2. should different word forms be counted as distinct? -Lemma: a set of lexical forms having the same stem, the same pos, and the same word-sense. So, cat and cats are the same lemma. -Sometimes, words are lemmatized by stemming, other times by morphological analysis, using a dictionary and/or morphological rules 3. fold case or not (usually folded)? -The the THE Mark versus mark -One may need, however, to regenerate the original case when presenting it to the user
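A sketch of a toy tokenizer that makes some of these choices explicit (split on whitespace and punctuation, keep word-internal apostrophes and hyphens, optionally fold case); note how "U.K." gets split apart, illustrating the period problem above. This is an illustration only, not a full tokenizer or morphological analyzer.

```python
# A toy tokenizer making the above choices explicit: split on whitespace and
# punctuation, keep word-internal apostrophes/hyphens, optionally fold case.
import re

TOKEN_RE = re.compile(r"[A-Za-z]+(?:['-][A-Za-z]+)*")

def tokenize(text, fold_case=True):
    tokens = TOKEN_RE.findall(text)
    return [t.lower() for t in tokens] if fold_case else tokens

print(tokenize("The U.K. won't store car's data-base records."))
# ['the', 'u', 'k', "won't", 'store', "car's", 'data-base', 'records']
# note how "U.K." is split apart -- the period problem mentioned above
```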

16 16 Counting: Word Tokens vs Word Types Word tokens in Tom Sawyer: 71,370 Word types: (i.e., how many different words) 8,018 In newswire text of that number of tokens, you would have 11,000 word types. Perhaps because Tom Sawyer is written in a simple style.

17 17 Inspecting word frequencies in a corpus Usage demo

18 18 Ngrams Sequences of linguistic items of length n

19 19 A test for association strength: Mutual Information 1988 AP corpus; N=44.3M Data from (Church et al. 1991)

20 20 Interpreting Mutual Information High scores, e.g., strong supporter (8.85), indicate strong association in the corpus MI is a logarithmic score: a score I corresponds to 2^I times chance, so 8.85 is about 2^8.85 ≈ 461 times chance Low scores – powerful support (1.74): this is only about 3 times chance, since with f(x,y)=2, f(powerful)=1984, f(support)=13,428 and N=44.3M, I = log2(N·f(x,y) / (f(x)·f(y))) = log2(2·44.3M / (1984·13,428)) ≈ 1.74 So a low score doesn't necessarily mean weakly associated – it could be due to data sparseness
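The conversion between MI scores and "times chance" can be checked directly from the counts quoted above (N = 44.3M, f(powerful) = 1984, f(support) = 13,428, f(powerful support) = 2); a minimal sketch:

```python
# Checking the MI numbers above from raw counts (1988 AP corpus, N = 44.3M).
import math

def mutual_information(f_xy, f_x, f_y, n):
    """I(x, y) = log2( N * f(x, y) / (f(x) * f(y)) )."""
    return math.log2(n * f_xy / (f_x * f_y))

N = 44_300_000
print(round(mutual_information(2, 1984, 13428, N), 2))  # "powerful support": ~1.73, i.e. ~3x chance
print(round(2 ** 8.85))                                  # "strong supporter": ~461x chance
```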

21 21 Mutual Information over Grammatical Relations Parse a corpus Determine subject-verb-object triples Identify head nouns of subject and object NPs Score subj-verb and verb-obj associations using MI

22 22 Demo of Verb-Subj, Verb-Obj Parses Who devours or what gets devoured?

23 23 MI over verb-obj relations Data from (Church et al. 1991)

24 24 A Subj-Verb MI Example: Who does what in news? [Table: verbs with the highest subject–verb MI (scores roughly 16–20) for three subjects — executive: reprimand, conceal, bank, foresee, conspire, convene, plead, sue, answer, commit, worry, accompany, own, witness; police: shoot, raid, arrest, detain, disperse, interrogate, swoop, evict, bundle, manhandle, search, confiscate, apprehend, round; politician: clamor, jockey, wrangle, woo, exploit, brand, behave, dare, sway, criticize, flank, proclaim, annul, favor.] Data from (Schiffman et al. 2001)

25 25 Famous Corpora Must see: Brown Corpus British National Corpus International Corpus of English Penn Treebank Lancaster-Oslo-Bergen Corpus Canadian Hansard Corpus U.N. Parallel Corpus TREC Corpora MUC Corpora English, Arabic, Chinese Gigawords Chinese, Arabic Treebanks North American News Text Corpus Multext East Corpus – 1984 in multiple Eastern/Central European languages

26 26 Links to Corpora Corpora: -Linguistic Data Consortium (LDC) -Oxford Text Archive -Project Gutenberg -CORPORA list Other: -Chris Manning's Corpora Page -Michael Barlow's Corpus Linguistics page -Cathy Ball's Corpora tutorial

27 27 Summary: Introduction Concordances and corpora are widely used and available, to help one to develop empirically-based linguistic theories and computer implementations The linguistic items that can be counted are many, but words (defined appropriately) are basic items The frequency distribution of words in any natural language is Zipfian -Data sparseness is a basic problem when using observations in a corpus sample of language Sequences of linguistic items (e.g., word sequences – n-grams) can also be counted, but the counts will be very rare for longer items Associations between items can be easily computed -e.g., associations between verbs and parser-discovered subjs or objs

28 28 Outline Topics - Concordances - Data sparseness - Chomsky's Critique - Ngrams - Mutual Information - Part-of-speech tagging - Annotation Issues - Inter-Annotator Reliability - Named Entity Tagging - Relationship Tagging Case Studies - metonymy - adjective ordering - Discourse markers: then - TimeML

29 29 Using POS in Concordances [Table: frequency per million words of deal tagged as a noun (\bdeal_NN) vs. as a verb (deal_VB) in the 1.5M-word Fiction 2000 corpus and the 10.5M-word English Gigaword corpus.] In Fiction 2000, deal is more often a verb; in English Gigaword, deal is more often a noun; deal is more prevalent in Fiction 2000 than in Gigaword

30 30 POS Tagging – What is it? Given a sentence and a tagset of lexical categories, find the most likely tag for each word in the sentence Tagset – e.g., Penn Treebank (45 tags, derived from the 87-tag Brown corpus tagset) Note that many of the words may have unambiguous tags Example Secretariat/NNP is/VBZ expected/VBN to/TO race/VB tomorrow/NN People/NNS continue/VBP to/TO inquire/VB the/DT reason/NN for/IN the/DT race/NN for/IN outer/JJ space/NN

31 31 More details of POS problem How ambiguous? -Most words in English have only one Brown Corpus tag Unambiguous (1 tag): 35,340 word types Ambiguous (2-7 tags): 4,100 word types = 11.5% 7 tags: 1 word type (the word still) -But many of the most common words are ambiguous Over 40% of Brown corpus tokens are ambiguous Obvious strategies may be suggested based on intuition to/TO race/VB the/DT race/NN will/MD race/NN Sentences can also contain unknown words for which tags have to be guessed: Secretariat/NNP is/VBZ

32 32 Different English Part-of-Speech Tagsets Brown corpus - 87 tags -Allows compound tags I'm tagged as PPSS+BEM - PPSS for "non-3rd person nominative personal pronoun" and BEM for "am, 'm" Others have derived their work from Brown Corpus -LOB Corpus: 135 tags -Lancaster UCREL Group: 165 tags -London-Lund Corpus: 197 tags -BNC – 61 tags (C5) -PTB – 45 tags Comparisons and mappings between these tagsets are available online

33 33 PTB Tagset (36 main tags + 9 punctuation tags)

34 34 PTB Tagset Development Several changes were made to Brown Corpus tagset: -Recoverability Lexical: Same treatment of Be, do, have, whereas BC gave each its own symbol - Do/VB does/VBZ did/VBD doing/VBG done/VBN Syntactic: Since parse trees were used as part of Treebank, conflated certain categories under the assumption that they would be recoverable from syntax - subject vs. object pronouns (both PP) - subordinating conjunctions vs. prepositions on being informed vs. on the table (both IN) - Preposition to vs. infinitive marker (both TO) -Syntactic Function BC: the/DT one/CD vs. PTB: the/DT one/NN BC: both/ABX vs. PTB: both/PDT the boys, the boys both/RB, both/NNS of the boys, both/CC boys and girls

35 35 PTB Tagging Process Tagset developed Automatic tagging by rule-based and statistical pos taggers Human correction using an editor embedded in Gnu Emacs Takes under a month for humans to learn this (at 15 hours a week), and annotation speeds after a month exceed 3,000 words/hour Inter-annotator disagreement (4 annotators, eight 2000-word docs) was 7.2% for the tagging task and 4.1% for the correcting task Manual tagging took about 2X as long as correcting, with about 2X the inter-annotator disagreement rate and an error rate that was about 50% higher. So, for certain problems, having a linguist correct automatically tagged output is far more efficient and leads to better reliability among linguists compared to having them annotate the text from scratch!

36 36 Automatic POS tagging

37 37 A Baseline Strategy Choose the most likely tag for each ambiguous word, independent of previous words -i.e., assign each token to the pos-category it occurred in most often in the training set E.g., race – which pos is more likely in a corpus? This strategy gives you 90% accuracy in controlled tests -So, this unigram baseline must always be compared against
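A sketch of the unigram baseline: tag each word with whatever tag it received most often in training, with a default for unknown words. The tiny training set here is invented for illustration only.

```python
# The unigram baseline: tag each word with its most frequent training-set tag.
# The toy training data is invented for illustration.
from collections import Counter, defaultdict

train = [("the", "DT"), ("race", "NN"), ("will", "MD"), ("race", "VB"),
         ("the", "DT"), ("race", "NN"), ("to", "TO"), ("race", "VB"),
         ("the", "DT"), ("race", "NN")]

counts = defaultdict(Counter)
for word, tag in train:
    counts[word][tag] += 1

def baseline_tag(word, default="NN"):
    return counts[word].most_common(1)[0][0] if word in counts else default

print([(w, baseline_tag(w)) for w in ["the", "race", "will", "unseen"]])
# [('the', 'DT'), ('race', 'NN'), ('will', 'MD'), ('unseen', 'NN')]
```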

38 38 Beyond the Baseline Hand-coded rules Sub-symbolic machine learning Symbolic machine learning

39 39 Machine Learning Machines can learn from examples Learning can be supervised or unsupervised Given training data, machines analyze the data, and learn rules which generalize to new examples Can be sub-symbolic (rule may be a mathematical function) –e.g. neural nets Or it can be symbolic (rules are in a representation that is similar to representation used for hand-coded rules) In general, machine learning approaches allow for more tuning to the needs of a corpus, and can be reused across corpora

40 40 A Probabilistic Approach to POS tagging What you want to do is find the best sequence of pos-tags C=C1..Cn for a sentence W=W1..Wn. -(Here C1 is pos_tag(W1)). In other words, find a sequence of pos tags C that maximizes P(C| W) Using Bayes Rule, we can say P(C| W) = P(W | C) * P(C) / P(W ) Since we are interested in finding the value of C which maximizes the RHS, the denominator can be discarded, since it will be the same for every C So, the problem is: Find C which maximizes P(W | C) * P(C) Example: He will race Possible sequences: -He/PP will/MD race/NN -He/PP will/NN race/NN -He/PP will/MD race/VB -He/PP will/NN race/VB W = W1 W2 W3 = He will race C = C1 C2 C3 -Choices: C= PP MD NN C= PP NN NN C = PP MD VB C = PP NN VB

41 41 Independence Assumptions P(C1…Cn) ≈ ∏i=1..n P(Ci|Ci-1) -assumes that the event of a pos-tag occurring is independent of the event of any other pos-tag occurring, except for the immediately previous pos tag From a linguistic standpoint, this seems an unreasonable assumption, due to long-distance dependencies P(W1…Wn | C1…Cn) ≈ ∏i=1..n P(Wi|Ci) -assumes that the event of a word appearing in a category is independent of the event of any other word appearing in a category Ditto However, the proof of the pudding is in the eating! -N-gram models work well for part-of-speech tagging

42 42 A Statistical Method for POS Tagging Find the value of C1..Cn which maximizes: ∏i=1..n P(Wi|Ci) * P(Ci|Ci-1), using two tables: lexical generation probabilities (e.g., P(he|PP)=.3, P(will|MD)=.8, P(will|NN)=.2, P(race|NN)=.4, P(race|VB)=.6) and pos bigram probabilities (e.g., P(MD|PP)=.8, P(NN|PP)=.2, P(NN|MD)=.4, P(VB|MD)=.6, P(NN|NN)=.3, P(VB|NN)=.7). [Figure: lattice of candidate tags (PRP, MD, NN, VB) over the words he will race.]

43 43 Finding the best path through an HMM (Viterbi algorithm) Score(I) = max over J ∈ pred(I) of [Score(J) * transition(I|J)] * lex(I) Score(B) = P(PP|<s>) * P(he|PP) = 1 * .3 = .3 Score(C) = Score(B) * P(MD|PP) * P(will|MD) = .3 * .8 * .8 = .19 Score(D) = Score(B) * P(NN|PP) * P(will|NN) = .3 * .2 * .2 = .012 Score(E) = max[Score(C)*P(NN|MD), Score(D)*P(NN|NN)] * P(race|NN) ≈ .03 Score(F) = max[Score(C)*P(VB|MD), Score(D)*P(VB|NN)] * P(race|VB) ≈ .07 [Figure: trellis with nodes A (start), B (he/PP), C (will/MD), D (will/NN), E (race/NN), F (race/VB).]
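A minimal sketch of the Viterbi computation above, using the toy lexical and bigram probabilities as reconstructed from this slide; the numbers and the tiny tag set are illustrative only, not a general-purpose tagger.

```python
# Minimal sketch of the Viterbi recursion
#   Score(I) = max over predecessors J of Score(J) * transition(I|J), times lex(I),
# using the toy probabilities from this slide (illustrative only).
lex = {("he", "PP"): 0.3, ("will", "MD"): 0.8, ("will", "NN"): 0.2,
       ("race", "NN"): 0.4, ("race", "VB"): 0.6}
trans = {("<s>", "PP"): 1.0, ("PP", "MD"): 0.8, ("PP", "NN"): 0.2,
         ("MD", "NN"): 0.4, ("MD", "VB"): 0.6, ("NN", "NN"): 0.3, ("NN", "VB"): 0.7}
tags = ["PP", "MD", "NN", "VB"]

def viterbi(words):
    cols = [{"<s>": (1.0, None)}]          # cols[i][tag] = (best score, back-pointer)
    for w in words:
        col = {}
        for t in tags:
            if (w, t) not in lex:
                continue
            score, back = max(((p * trans.get((q, t), 0.0), q)
                               for q, (p, _) in cols[-1].items()), key=lambda x: x[0])
            col[t] = (score * lex[(w, t)], back)
        cols.append(col)
    tag = max(cols[-1], key=lambda t: cols[-1][t][0])   # best final tag
    path = [tag]
    for i in range(len(cols) - 1, 1, -1):               # follow back-pointers
        tag = cols[i][tag][1]
        path.append(tag)
    return list(reversed(path))

print(viterbi(["he", "will", "race"]))     # ['PP', 'MD', 'VB']
```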

44 44 But Data Sparseness Bites Again! Lexical generation probabilities will lack observations for low-frequency and unknown words Most systems do one of the following -Smooth the counts E.g., add a small number to unseen data (to zero counts), so that a bigram not seen in the data gets a very small nonzero probability Backoff bigrams with unigrams, etc. -Use lots more data (you'll still lose, thanks to Zipf!) -Group items into classes, thus increasing class frequency e.g., group words into ambiguity classes, based on their set of tags. For counting, all words in an ambiguity class are treated as variants of the same word
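A small sketch of the add-one (Laplace) smoothing idea mentioned above, applied to POS bigram counts; all counts here are made up for illustration, and real systems often prefer backoff or more sophisticated smoothing.

```python
# Sketch of add-one (Laplace) smoothing for pos bigram probabilities,
# so that unseen tag pairs get a small nonzero probability. Counts are invented.
from collections import Counter

bigram_counts = Counter({("DT", "NN"): 50, ("NN", "VBZ"): 30, ("DT", "JJ"): 20})
prev_counts = Counter({"DT": 70, "NN": 80, "JJ": 20, "VBZ": 30})
tagset_size = len(prev_counts)

def p_smoothed(tag, prev):
    """P(tag | prev) with add-one smoothing over the tagset."""
    return (bigram_counts[(prev, tag)] + 1) / (prev_counts[prev] + tagset_size)

print(round(p_smoothed("NN", "DT"), 3))    # seen:   (50 + 1) / (70 + 4) = 0.689
print(round(p_smoothed("VBZ", "DT"), 3))   # unseen: (0 + 1)  / (70 + 4) = 0.014
```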

45 45 A Symbolic Learning Method HMMs are subsymbolic – they don't give you rules that you can inspect A method called Transformational Rule Sequence learning (Brill algorithm) can be used for symbolic learning (among other approaches) The rules (actually, a sequence of rules) are learnt from an annotated corpus Performs at least as accurately as other statistical approaches Has better treatment of context compared to HMMs -rules which use the next (or previous) pos HMMs just use P(Ci| Ci-1) or P(Ci| Ci-2Ci-1) -rules which use the previous (next) word HMMs just use P(Wi|Ci)

46 46 Brill Algorithm (Overview) Assume you are given a training corpus G (for gold standard) First, create a tag-free version V of it Notes: -As the algorithm proceeds, each successive rule becomes narrower (covering fewer examples, i.e., changing fewer tags), but also potentially more accurate -Some later rules may change tags changed by earlier rules 1. First label every word token in V with most likely tag for that word type from G. If this initial state annotator is perfect, you're done! 2. Then consider every possible transformational rule, selecting the one that leads to the most improvement in V using G to measure the error 3. Retag V based on this rule 4. Go back to 2, until there is no significant improvement in accuracy over previous iteration

47 47 Brill Algorithm (Detailed) 1. Label every word token with its most likely tag (based on lexical generation probabilities). 2. List the positions of tagging errors and their counts, by comparing with ground-truth (GT) 3. For each error position, consider each instantiation I of X, Y, and Z in Rule template. If Y=GT, increment improvements[I], else increment errors[I]. 4. Pick the I which results in the greatest error reduction, and add to output e.g., VB NN PREV1OR2TAG DT improves 98 errors, but produces 18 new errors, so net decrease of 80 errors 5. Apply that I to corpus 6. Go to 2, unless stopping criterion is reached Most likely tag: P(NN|race) =.98 P(VB|race) =.02 Is/VBZ expected/VBN to/TO race/NN tomorrow/NN Rule template: Change a word from tag X to tag Y when previous tag is Z Rule Instantiation to above example: NN VB PREV1OR2TAG TO Applying this rule yields: Is/VBZ expected/VBN to/TO race/VB tomorrow/NN
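A highly simplified sketch of one iteration of this learning loop, for the single rule template "change tag X to Y when the previous tag is Z". The gold data and initial tags are invented; a real implementation indexes error positions and rule candidates much more efficiently.

```python
# Simplified sketch of one iteration of the Brill loop for the template
# "change tag X to Y when the previous tag is Z". Data is invented.
from itertools import product

gold = [("to", "TO"), ("race", "VB"), ("the", "DT"), ("race", "NN")]
current = ["TO", "NN", "DT", "NN"]              # initial most-likely-tag assignment

def net_gain(rule, tags):
    """Errors fixed minus errors introduced if the rule were applied."""
    x, y, z = rule
    fixed = broken = 0
    for i in range(1, len(tags)):
        if tags[i] == x and tags[i - 1] == z:
            if gold[i][1] == y:
                fixed += 1
            elif gold[i][1] == x:
                broken += 1
    return fixed - broken

tagset = {"TO", "VB", "DT", "NN"}
candidates = [r for r in product(tagset, repeat=3) if r[0] != r[1]]
best = max(candidates, key=lambda r: net_gain(r, current))
print(best, net_gain(best, current))            # ('NN', 'VB', 'TO') 1

# apply the learned rule to the corpus; the loop would then repeat
x, y, z = best
current = [y if i > 0 and t == x and current[i - 1] == z else t
           for i, t in enumerate(current)]
print(current)                                  # ['TO', 'VB', 'DT', 'NN']
```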

48 48 Example of Error Reduction From Eric Brill (1995): Computational Linguistics, 21, 4, p. 7

49 49 Example of Learnt Rule Sequence 1. NN VB PREVTAG TO - to/TO race/NN->VB 2. VBP VB PREV1OR2OR3TAG MD -might/MD vanish/VBP-> VB 3. NN VB PREV1OR2TAG MD - might/MD not/MD reply/NN -> VB 4. VB NN PREV1OR2TAG DT -the/DT great/JJ feast/VB->NN 5. VBD VBN PREV1OR2OR3TAG VBZ -He/PP was/VBZ killed/VBD->VBN by/IN Chapman/NNP

50 50 Handling Unknown Words Can also use the Brill method Guess NNP if capitalized, NN otherwise. Or use the tag most common for words ending in the last 3 letters. etc. Example Learnt Rule Sequence for Unknown Words

51 51 POS Tagging using Unsupervised Methods Reason: Annotated data isn't always available! Example: the can Let's take unambiguous words from the dictionary, and count their occurrences after the -the.. elephant -the.. guardian Conclusion: immediately after the, nouns are more common than verbs or modals Initial state annotator: for each word, list all tags in the dictionary Transformation template: Change the tag X of a word to tag Y if the previous (next) tag (word) is Z, where X is a set of 2 or more tags -Don't change any other tags

52 52 Error Reduction in Unsupervised Method Let a rule to change tag set X to Y in context C be represented as Rule(X, Y, C). -Rule1: {VB, MD, NN} NN PREVWORD the -Rule2: {VB, MD, NN} VB PREVWORD the Idea: -since annotated data isn't available, score rules so as to prefer those where Y appears much more frequently in the context C than all the other tags in X frequency is measured by counting unambiguously tagged words so, prefer {VB, MD, NN} NN PREVWORD the to {VB, MD, NN} VB PREVWORD the since dict-unambiguous nouns are more common in a corpus after the than dict-unambiguous verbs

53 53 Summary: POS tagging A variety of POS tagging schemes exist, even for a single language Preparing a POS-tagged corpus requires, for efficiency, a combination of automatic tagging and human correction Automatic part-of-speech tagging can use -Hand-crafted rules based on inspecting a corpus -Machine Learning-based approaches based on corpus statistics e.g., HMM: lexical generation probability table, pos transition probability table -Machine Learning-based approaches using rules derived automatically from a corpus Combinations of different methods often improve performance

54 54 Outline Topics - Concordances - Data sparseness - Chomsky's Critique - Ngrams - Mutual Information - Part-of-speech tagging - Annotation Issues - Inter-Annotator Reliability - Named Entity Tagging - Relationship Tagging Case Studies - metonymy - adjective ordering - Discourse markers: then - TimeML

55 55 Adjective Ordering *A political serious problem *A social extravagant life *red lovely hair *old little lady *green little men Adjectives have been grouped into various classes to explain ordering phenomena

56 56 Collins COBUILD L2 Grammar qualitative < color < classifying Qualitative – expresses a quality that someone or something has, e.g., sad, pretty, small, etc. -Qualitative adjectives are gradable, i.e., the person or thing can have more or less of the quality Classifying – used to identify the class something belongs to, i.e., distinguishing -financial help, American citizens. - Classifying adjectives aren't gradable. So, the ordering reduces to -Gradable < color < non-gradable A serious political problem Lovely red hair Big rectangular green Chinese carpet

57 57 Vendler 68 A9 < A8 < … < A2 < A1 x

58 58 Other Adjective Ordering Theories Goyvaerts 68 quality < size/length/shape < age < color < naturally < style < general < denominal Quirk & Greenbaum 73 Intensifying perfect < general-measurable careful wealthy < age young old < color < denominal material woollen scarf < denominal style Parisian dress Dixon 82 value < dimension < physical property < speed < human propensity < age < color Frawley 92 value < size < color (English, German, Hungarian, Polish, Turkish, Hindi, Persian, Indonesian, Basque) Collins COBUILD: gradable < color < non-gradable Goyvaerts, Q&G, Dixon: size < age < color Goyvaerts, Q&G: color < denominal Goyvaerts, Dixon: shape < color

59 59 Testing the Theories on Large Corpora The theories above offer selective coverage of a particular language or (small) set of languages, are based on categories that aren't defined precisely enough to be computable, and rest on small numbers of examples; corpus testing replaces these with computable categories and large numbers of examples Test: gradable < color < non-gradable

60 60 Computable Tests for Gradable Adjectives Submodifiers expressing gradation -very|rather|somewhat|extremely A But what about very British? Periphrastic comparatives -"more A than" | "the most A" Inflectional comparatives --er|-est
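These tests can be turned directly into search patterns; a sketch over invented example sentences (a real study would run the patterns over a large, possibly POS-tagged, corpus):

```python
# Regex versions of the gradability tests, over invented example sentences;
# a real study would run these over a large (possibly POS-tagged) corpus.
import re

adj = "serious"
tests = {
    "submodifier":  rf"\b(?:very|rather|somewhat|extremely)\s+{adj}\b",
    "periphrastic": rf"\b(?:more|most)\s+{adj}\b",
    "inflectional": rf"\b{adj}(?:er|est)\b",
}

sentences = ["This is a very serious problem.",
             "It became more serious than expected.",
             "That was the seriousest claim of all."]   # unattested form, just exercises the regex

for name, pattern in tests.items():
    hits = [s for s in sentences if re.search(pattern, s, re.IGNORECASE)]
    print(name, len(hits))     # submodifier 1, periphrastic 1, inflectional 1
```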

61 61 Challenges: Data Sparseness Data sparseness -Only some pairs will be present in a given corpus; few adjectives on the gradable list may be present -Even fewer longer sequences will be present in a corpus Use transitivity? -e.g., from small < red and red < wooden, conclude small < wooden?

62 62 Challenges: Tool Incompleteness Search pattern will return many non-examples -Collocations common or marked ones - American green card - national Blue Cross -Adjective Modification bright blue -POS-tagging errors -May also miss many examples

63 63 Results from Corpus Analysis G < C < not G generally holds However, there are exceptions -Classifying/Non-Gradable < Color After all, the maple leaf replaced the British red ensign as Canada's flag almost 30 years ago. where he stood on a stage festooned with balloons displaying the Palestinian green, white and red flag -Color < Shape paintings in which pink, roundish shapes, enriched with flocking, gel, lentils and thread, suggest the insides of the female body.

64 64 Summary: Adjective Ordering It is possible to test concrete predictions of a linguistic theory in a corpus-based setting The testing means that the machine searches for examples satisfying patterns that the human specifies The patterns can pre-suppose a certain/high degree of automatic tagging, with attendant loss of accuracy The patterns should be chosen so that they provide handles to identify the phenomena of interest The patterns should be restricted enough that the number of examples the human has to judge is not infeasible This is usually an iterative process

65 65 Outline Topics - Concordances - Data sparseness - Chomsky's Critique - Ngrams - Mutual Information - Part-of-speech tagging - Annotation Issues - Named Entity Tagging - Inter-Annotator Reliability - Relationship Tagging Case Studies - metonymy - adjective ordering - Discourse markers: then - TimeML

66 66 The Art of Annotation 1. Define Goal 2. Eyeball Data (with the help of Computers) 3. Design Annotation Scheme 4. Develop Example-based Guidelines 5. Unless satisfied/exhausted, goto 1 6. Write Training Manuals 7. Initiate Human Training Sessions 8. Annotate Data / Train Computers Computers can also help with the annotation 9. Evaluate Humans and Computers 10. Unless satisfied/exhausted, goto 1

67 67 Annotation Methodology Picture [Diagram: a raw corpus is run through an initial tagger and then corrected in an annotation editor, following the annotation guidelines, to produce an annotated corpus; a machine learning program trained on the annotated corpus produces learned rules, which are applied back to raw corpus text (possibly together with a knowledge base).]

68 68 Goals of an Annotation Scheme Simplicity – simple enough for a human to carry out Precision – precise enough to be useful in CLI applications Text-based – annotation of an item should be based on information conveyed by the text, rather than information conveyed by background information Human-centered – should be based on what a human can infer from the text, rather than what a machine can currently do or not do Reproducible – your annotation should be reproducible by other humans (i.e., inter-annotator agreement should be high) -obviously, these other humans may have to have particular expertise and training

69 69 What Should An Annotation Contain Additional Information about the text being annotated – e.g., EAGLES external and internal criteria Information about the annotator – who, when, what version of tool, etc. (usually in meta-tags associated with the text) The tagged text itself

70 70 External and Internal Criteria (EAGLES) External: participants, occasion, social setting, communicative function -origin: Aspects of the origin of the text that are thought to affect its structure or content. -state: the appearance of the text, its layout and relation to non-textual matter, at the point when it is selected for the corpus. -aims: the reason for making the text and the intended effect it is expected to have. Internal: patterns of language use -Topic (economics, sports, etc.) -Style (formal/informal, etc.)

71 71 External Criteria – state (EAGLES) Mode -spoken participant awareness: surreptitious/warned/aware venue: studio/on location/telephone -written Relation to the medium -written: how it is laid out, the paper, print, etc. -spoken: the acoustic conditions, etc. Relation to non-linguistic communicative matter -diagrams, illustrations, other media that are coupled with the language in a communicative event. Appearance -e.g., advertising leaflets, aspects of presentation that are unique in design and are important enough to have an effect on the language.

72 72 Examples of annotation schemes (changing the way we do business!) POS tagging annotation – Penn Treebank Scheme Named entity annotation – ACE Scheme Phrase Structure annotation – Penn Treebank scheme Time Expression annotation – TIMEX2 Scheme Protein Name Annotation – GU Scheme Event Annotation – TimeML Scheme Rhetorical Structure Annotation - RST Scheme Coreference Annotation, Subjectivity Annotation, Gesture Annotation, Intonation Annotation, Metonymy Annotation, etc., etc. Several hundred schemes exist, for different problems in different languages

73 73 POS Tag Formats: Non-SGML – to SGML CLAWS tagger: non-SGML -What_DTQ can_VM0 CLAWS_NN2 do_VDI to_PRP Inderjeet_NP0 's_POS nonsense_NN1 text_NN1 ?_? Brill tagger: non-SGML -What/WP can/MD CLAWS/NNP do/VB to/TO Inderjeet/NNP 's/POS nonsense/NN text/NN ?/. Alembic POS tagger: SGML, with each token wrapped in a tag (the markup is not preserved in this transcript) - What can CLAWS do to Inderjeet ' s nonsense text ? Conversion to SGML is pretty trivial in such cases

74 74 SGML (Standard Generalized Markup Language) A general markup language for text -HTML is an instance of an SGML encoding Text Encoding Initiative (TEI): defines SGML schemes for marking up humanities text resources as well as dictionaries Examples: - I'm really hungry right now. Oh, yeah? - That is an ugly couch. Note: some elements can consist just of a single tag Character references: ways of referring to the non-ASCII characters using a numeric code -&#229; (this is in decimal) and &#xE5; (this is in hexadecimal) both refer to å Entity references: are used to encode a special character or sequence of characters via a symbolic name - r&eacute;sum&eacute; for résumé -&docdate;

75 75 DTDs A document type definition, or DTD, is used to define a grammar of legal SGML structures for a document -e.g., para should consist of one or more sentences and nothing else SGML parser verifies that document is compliant with DTD DTDs can therefore be used for XML as well DTDs can specify what attributes are required, in what order, what their legit values are, etc. The DTDs are often ignored in practice!

76 76 XML Extensible Markup Language (XML) is a simple, very flexible text format derived from SGML. Originally designed to meet the challenges of large-scale electronic publishing, XML is also playing an increasingly important role in the exchange of a wide variety of data on the Web and elsewhere. Defines a simplified subset of SGML, designed especially for Web applications Unlike HTML, separates out display (e.g., XSL) from content (XML) Makes use of DTDs, but also RDF Schemas

77 77 RDF Schemas Example of Real RDF Schema: TimeML.xsd (see EVENT tag and attributes)

78 78 Inline versus Standoff Annotation Usually, when tags are added, an annotation tool is used, to avoid spurious insertions or deletions The annotation tool may use inline or standoff annotation Inline – tags are stored internally in (a copy of) the source text. -Tagged text can be substantially larger than original text -Web pages are a good example – i.e., HTML tags Standoff – tags are stored internally in separate files, with information as to what positions in the source text the tags occupy -e.g., a PERSON tag recorded together with the start and end character offsets of the name it covers However, the annotation tool displays the text as if the tags were in-line
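A sketch of how inline markup can be converted to standoff annotation (tag name plus character offsets into the untagged source text); the sentence and the PERSON/ORG tags are invented for illustration.

```python
# Turning inline annotation into standoff annotation (tag + character offsets).
# The sentence and the PERSON/ORG tags are invented for illustration.
import re

inline = "<PERSON>Inderjeet Mani</PERSON> teaches at <ORG>Georgetown University</ORG>."

standoff, plain, pos = [], [], 0
for m in re.finditer(r"<(\w+)>(.*?)</\1>|([^<]+)", inline):
    tag, tagged, untagged = m.group(1), m.group(2), m.group(3)
    chunk = tagged if tag else untagged
    if tag:
        standoff.append((tag, pos, pos + len(chunk)))
    plain.append(chunk)
    pos += len(chunk)

source = "".join(plain)
print(source)       # the untagged source text
print(standoff)     # [('PERSON', 0, 14), ('ORG', 26, 47)]
```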

79 79 Summary: Annotation Issues A best-practices methodology is widely used for annotating corpora The annotation process involves computational tools at all stages Standard guidelines are available for use To share annotated corpora (and to ensure their survivability), it is crucial that the data be represented in a standard rather than ad hoc format XML provides a well-established, Web-compliant standard for markup languages DTDs and RDF provide mechanisms for checking well- formedness of annotation

80 80 Outline Topics - Concordances - Data sparseness - Chomsky's Critique - Ngrams - Mutual Information - Part-of-speech tagging - Annotation Issues - Inter-Annotator Reliability - Named Entity Tagging - Relationship Tagging Case Studies - metonymy - adjective ordering - Discourse markers: then - TimeML

81 81 Background Deborah Schiffrin. Anaphoric then: aspectual, textual, and epistemic meaning. Linguistics 30 (1992). Schiffrin examines uses of then in data elicited via 20 sociolinguistic interviews, each an hour long Distinguishes two anaphoric temporal senses, showing that they are differentiated by clause position Shows that they have systematic effects on aspectual interpretation A parallel argument is made for two epistemic temporal senses

82 82 Schiffrin: Temporal and Non-Temporal Senses Anaphoric Senses - Narrative temporal sense (shifts reference time) And then I uh lived there until I was sixteen - Continuing Temporal sense (continues a previous reference time) I was only a little boy then. Epistemic senses - Conditional sentences (rare, but often have temporal antecedents in her data) But if I think about it for a few days -- well, then I seem to remember a great deal …if I'm still in the situation where I am now… I'm not gonna have no more then - Initiation-response-evaluation sequences (in that case?) Freda: Do y' still need the light? Debby: Um. Freda: W'll have t' go in then. Because the bugs are out.

83 83 Schiffrin's Argument (Simplified) and Its Test Shifting RT thens (call these Narrative) & then in if-then conditionals - similar semantic function - mainly clause-initial Continuing RT thens (call these Temporal) & IRE thens - similar semantic function - mainly clause final - stative verb more likely (since RT overlaps, verbs conveying duration are expected) Call the rest Other - isn't differentiated into if-then versus IRE - So, only part of her claims tested

84 84 So, What do we do Then? Define environments of interest, each one defined by a pattern For each environment 1.Find examples matching the pattern 2.If classifying the examples is manageable, carry it out and stop 3.Otherwise restrict the environment by adding new elements to the pattern, and go back to 1 So, for each final environment, we claim that X% of the examples in that environment are of a particular class Initial then Pattern: (^|_CC|_RB)\s*then\w+\s+\w Final then Pattern: [^\,]\s+then[\.\?\'\;\!\:]
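The two patterns above can be applied as ordinary regular expressions; they are copied from this slide (and may have lost characters in transcription), so treat the sketch as approximate. The initial-then pattern assumes POS-tagged input such as "Then_RB he_PRP left_VBD".

```python
# Applying the clause-initial and clause-final "then" patterns above.
# The patterns are copied from the slide (and may have lost characters in
# transcription); the initial pattern assumes POS-tagged input like "Then_RB he_PRP ...".
import re

initial_pat = re.compile(r"(^|_CC|_RB)\s*then\w+\s+\w", re.IGNORECASE)
final_pat = re.compile(r"[^\,]\s+then[\.\?\'\;\!\:]", re.IGNORECASE)

examples = ["Then_RB he_PRP left_VBD ._.",
            "I was only a little boy then."]

for s in examples:
    kind = ("initial" if initial_pat.search(s)
            else "final" if final_pat.search(s) else "neither")
    print(kind, "->", s)
```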

85 85 Exceptions Non-Narrative Initial then: then there [be]; then come; then again; then and now; only then; even then; so then Non-Temporal Final then: What then?; All right/OK [,] then; And then?

86 86 Results [Table: percentage (and counts) of Temporal (T), Narrative (N), and Other (O) uses of then by clause position in Written Fiction 2000, Spoken Broadcast News, and Written Gigaword News. Clause-initial then is overwhelmingly Narrative (1276/1322 in fiction; 93.88%, 768/818, in broadcast news; 562/740 in Gigaword, with T only 1.73%, 0.73%, and 3.64% respectively), while clause-final then is overwhelmingly Temporal (71.81%, 79/110, in fiction; 61/84 in broadcast news; 179/192 in Gigaword).] Other is a presence in final position in fiction and broadcast news, and in initial position in print news. Is this real or an artifact of the catch-all class? Conclusion: only part of her claims tested. But those claims are borne out across three different genres and much more data!

87 87 Outline Topics - Concordances - Data sparseness - Chomsky's Critique - Ngrams - Mutual Information - Part-of-speech tagging - Annotation Issues - Inter-Annotator Reliability - Named Entity Tagging - Relationship Tagging Case Studies - metonymy - adjective ordering - Discourse markers: then - TimeML

88 88 Considerations in Inter-Annotator Agreement Size of tagset Structure of tagset Clarity of Guidelines Number of raters Experience of raters Training of raters - Independent ratings (preferred) - Consensus (not preferred) Exact, partial, and equivalent matches Metrics Lessons Learned: Disagreement patterns suggest guideline revisions

89 89 Protein Names Considerable variability in the forms of the names Multiple naming conventions Researchers may name a newly discovered protein based on - function - sequence features - gene name - cellular location - molecular weight - discoverer - or other properties Prolific use of abbreviations and acronyms fushi tarazu 1 factor homolog Fushi tarazu factor (Drosophila) homolog 1 FTZ-F1 homolog ELP steroid/thyroid/retinoic nuclear hormone receptor homolog nhr-35 V-INT 2 murine mammary tumor virus integration site oncogene homolog fibroblast growth factor 1 (acidic) isoform 1 precursor nuclear hormone receptor subfamily 5, Group A, member 1

90 90 Guidelines v1 TOC

91 91 Agreement Metrics Confusion matrix (Reference vs. Candidate): Reference Yes / Candidate Yes = TP; Reference Yes / Candidate No = FN; Reference No / Candidate Yes = FP; Reference No / Candidate No = TN Measures: Percentage Agreement = 100*(TP+TN)/(TP+FP+TN+FN) Precision = TP/(TP+FP) Recall = TP/(TP+FN) (Balanced) F-Measure = 2*Precision*Recall/(Precision+Recall)

92 92 Example for F-measure: Scorer Output (Protein Name Tagging) [Scorer output: the reference contains 2 names (FTZ-F1 homolog ELP and M2-LHX3); the candidate contains 4 names, of which FTZ-F1 homolog ELP is correct, M2 is an incorrect partial match for M2-LHX3, and the rest (e.g., LHX3) are spurious.] Precision = ¼ = 0.25 Recall = ½ = 0.5 F-measure = 2 * ¼ * ½ / ( ¼ + ½ ) = 0.33
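The figures above can be reproduced in a few lines, treating the scorer output simply as counts (1 correct name among 4 candidate names, measured against 2 reference names):

```python
# Reproducing the scorer figures above: 1 correct name out of 4 candidate
# names, measured against 2 reference names.
tp, n_candidate, n_reference = 1, 4, 2

precision = tp / n_candidate               # 0.25
recall = tp / n_reference                  # 0.5
f_measure = 2 * precision * recall / (precision + recall)
print(precision, recall, round(f_measure, 2))   # 0.25 0.5 0.33
```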

93 93 The importance of disagreement Measuring inter-annotator agreement is very useful in debugging the annotation scheme Disagreement can lead to improvements in the annotation scheme Extreme disagreement can lead to abandonment of the scheme

94 94 V2 Assessment (ABS2) Old Guidelines - protein 0.71 F - acronym 0.85 F - array-protein 0.15 F New Guidelines - protein 0.86 F - long-form 0.71 F (these are only ~4% of tags) [Table: number correct, precision, recall, and F-measure for each pair of annotators and their average, under the old and the new guidelines.]

95 95 TIMEX2 Annotation Scheme Time Points the third week of October Durations half an hour long Indexicality tomorrow Sets every Tuesday Fuzziness Summer of 1990 This morning Non-specificity April is usually wet. Guidelines, tools, and corpora for TIMEX2 are available online

96 96 TIMEX2 Inter-Annotator Agreement 193 NYT news docs 5 annotators 10 pairs of annotators Human annotation quality is acceptable on EXTENT and VAL Poor performance on Granularity and Non-Specific But only a small number of instances of these (200 ~ 6000) Annotators deviate from guidelines, and produce systematic errors (fatigue?) several years ago: PXY instead of PAST_REF all day: P1D instead of YYYY-MM-DD

97 97 TempEx in Qanda

98 98 Summary: Inter-Annotator Reliability There's no point going on with an annotation scheme if it can't be reproduced There are standard methods for measuring inter-annotator reliability An analysis of inter-annotator disagreements is critical for debugging an annotation scheme

99 99 Outline Topics - Concordances - Data sparseness - Chomsky's Critique - Ngrams - Mutual Information - Part-of-speech tagging - Annotation Issues - Inter-Annotator Reliability - Named Entity Tagging - Relationship Tagging Case Studies - metonymy - adjective ordering - Discourse markers: then - TimeML

100 100 Information Extraction Types - Flag names of people, organizations, places,… - Flag and normalize phrases such as time expressions, measure phrases, currency expressions, etc. - Group coreferring expressions together - Find relations between named entities (works for, located at, etc.) - Find events mentioned in the text - Find relations between events and entities - A hot commercial technology! Example patterns: - "Mr. ___" signals a person name - "___, Ill." signals a place name

101 101 Message Understanding Conferences (MUCs) Idea: precise tasks to measure success, rather than a test suite of input and logical forms. MUC-1 and MUC-2: messages about navy operations MUC-3 and MUC-4: news articles and transcripts of radio broadcasts about terrorist activity MUC-5: news articles about joint ventures and microelectronics MUC-6: news articles about management changes, + additional tasks of named entity recognition, coreference, and template element MUC-7: mostly multilingual information extraction Has also been applied to hundreds of other domains - scientific articles, etc., etc.

102 102 Historical Perspective Until MUC-3 (1993), many IE systems used a Knowledge Engineering approach -They did something like full chart parsing with a unification-based grammar with full logical forms, a rich lexicon and KB -E.g., SRI's Tacitus Then, they discovered that things could work much faster using finite-state methods and partial parsing And that using domain-specific rather than general purpose lexicons simplified parsing (less ambiguity due to fewer irrelevant senses) And that these methods worked even better for the IE tasks -E.g., SRI's Fastus, SRA's Nametag Meanwhile, people also started using statistical learning methods from annotated corpora -Including CFG parsing

103 103 An instantiated scenario template Wall Street Journal, 06/15/88 MAXICARE HEALTH PLANS INC and UNIVERSAL HEALTH SERVICES INC have dissolved a joint venture which provided health services.

104 104 Templates Can get Complex! (MUC-5)

105 Automatic Content Extraction (ACE) Program: Entity Types Person Organization (Place) -Location – e.g., geographical areas, landmasses, bodies of water, geological formations -Geo-Political Entity – e.g., nations, states, cities Created due to metonymies involving this class of places The riots in Miami Miami imposed a curfew Miami railed against a curfew Facility – buildings, streets, airports, etc.

106 106 ACE Entity Attributes and Relations Attributes -Name: An entity mentioned by name -Pronoun -Nominal Relations -AT: based-in, located, residence -NEAR: relative-location -PART: part-of, subsidiary, other -ROLE: affiliate-partner, citizen-of, client, founder, general- staff, manager, member, owner, other -SOCIAL: associate, grandparent, parent, sibling, spouse, other-relative, other-personal, other-professional

107 107 Designing an Information Extraction Task Define the overall task Collect a corpus Design an Annotation Scheme - linguistic theories help Use Annotation Tools -- authoring tools --automatic extraction tools Apply to annotation to corpus, assessing reliability Use training portion of corpus to train information extraction (IE) systems Use test portion to test IE systems, using a scoring program

108 108 Annotation Tools Specialized authoring tools used for marking up text without damaging it Some tools are tied to particular annotation schemes

109 109 Annotation Tool Example: Alembic Workbench

110 110 Callisto (Java successor to Alembic Workbench)

111 Page 111 Relationship Annotation: Callisto

112 112 Steps in Information Extraction Tokenization -Language Identification -Document Zoning -Sentence and Word Tokenization Morphological and Lexical Processing - Tagging entities of interest - Specific trigger lexicons -Dealing with unknown words - Part-of-Speech Tagging -Word-Sense Tagging -Morphological Analysis Parsing -Finite-State Parsing (usually just chunking) Domain Semantics -Coreference -Merging Partial Results

113 113 Morphological Analysis Inflectional morphology, mostly For simple languages (English, Japanese) – simple inflectional module suffices For more complex languages (Spanish) – a finite-state transducer is used For morphologically very complex languages (Arabic, Hebrew) – complex finite state transducer architectures For languages with productive noun compounding (German) – specialized module needed

114 114 Finite-State Parsing for IE A.C. Nielsen Co. NG said VG George Garrick NG, 40 years old, president NG of Information Resources Inc. NG 's London-based European Information Services operation NG, will become VG president NG and chief operating officer NG of Nielsen Marketing Research USA NG, a unit NG of Dun & Bradstreet Corp. NG First find NG, VG, particles; ignore PP attachment; ignore clause boundaries; maybe ignore modifiers that aren't domain-relevant Later transducers handle more complex phenomena: -relative clauses (e.g., look for second verb for marking end of rc; subject relatives: associate subject with first and second verb; object relatives: associate object with head noun before rel mod) -general clause segmentation -coordination -appositives -PP argument attachment (only for verbs important in domain whose subcat info is provided – rest are adverbial adjuncts)

115 115 Example Text Processing KEY: Trigger word tagging Named Entity tagging Chunk parsing: NGs, VGs, preps, conjunctions Bridgestone Sports Co. said Friday it has set up a joint venture in Taiwan with a local concern and a Japanese trading house to produce golf clubs to be shipped to Japan. Company NG Set-UP VG Joint-Venture NG with Company NG Produce VG Product NG The joint venture, Bridgestone Sports Taiwan Co., capitalized at 20 million new Taiwan dollars, will start production in January 1990 with production of 20,000 iron and metal wood clubs a month.

116 116 Merging Structures Activity: Type: PRODUCTION Company: Product: golf clubs Start-date: Bridgestone Sports Co. said Friday it has set up a joint venture in Taiwan with a local concern and a Japanese trading house to produce golf clubs to be shipped to Japan. Activity: Type: PRODUCTION Company: Bridgestone Sports Taiwan Co Product: iron and metal wood clubs Start-date: DURING 1990 The joint venture, Bridgestone Sports Taiwan Co., capitalized at 20 million new Taiwan dollars, will start production in January 1990 with production of 20,000 iron and metal wood clubs a month.

117 117 Coreference Coreference means establishing referential relations between expressions. -Pronouns..Mr. Gates …he, the testimony….it -Definite NPs Microsoft….the company -Indefinite NPs the building…an apartment -Proper Names Bill Gates…William Gates…. Mr. Gates -Temporal Expressions today, three weeks from Monday -Headless Determiners all, the one, five -Prenominals aluminum siding …the price of aluminum -Events they attacked at dawn…the attack Types of relationships: -Identity, Part-whole -Set-subset the jurors…five of them -Set-member the jurors…one of them

118 118 Statistical Named Entity Tagging Typically, treat it as a word-level tagging problem -To get phrase-level tags, one could greedily concatenate adjacent tags this will fail to separate like tags Approaches can separately model words at start, end, or middle of name - BBN Identifinder does that P(C|W) = P(W, C)/P(W), so choose the C that maximizes P(W, C), factored as P(C_i | C_i-1, w_i-1) * P(<w,f>_first | C_i, C_i-1) for the first word in a name, and P(<w,f>_i | <w,f>_i-1, C_i) for all but the first word in a name Word features f include information about capitalization, initials, etc.

119 119 Information Extraction Metrics Precision: Correct Answers/Answers Produced Recall: Correct Answers/Total Possible Correct F-measure - uses a parameter β to weight precision versus recall (β=1 for balance) F = (β²+1)·P·R / (β²·P + R) F ≈ .6 for the relationship/event extraction task (ceiling) in MUC F ≈ .95+ for the named entity task in MUC F ≈ .8 or so for the coreference task

120 120 IE and QA Evaluations [Chart: current status for various information extraction and question-answering components — event extraction, names in English, names in Japanese, names in Chinese, question answering, relations, and names from speech at 0%–15% word error.]

121 121 Summary: Information Extraction A variety of IE tasks and methods are available Named entities, relations, and event templates can be filled, as well as coreference relations Linguistic information used can be hand-crafted or corpus-based Domain knowledge, where needed, is hand-crafted Performance on names is better than on relations, while deep templates have shown a 60% ceiling effect

122 122 Outline Topics - Concordances - Data sparseness - Chomsky's Critique - Ngrams - Mutual Information - Part-of-speech tagging - Annotation Issues - Inter-Annotator Reliability - Named Entity Tagging - Relationship Tagging Case Studies - metonymy - adjective ordering - Discourse markers: then - TimeML

123 123 Motivation for Temporal Information Extraction Story Understanding - Question-answering - Summarization Focus on temporal aspects of narrative

124 124 Chronology of The Marathon (mini-story) Yesterday Holly was running a marathon when she twisted her ankle. David had pushed her. [Diagram: timeline with the events run, twist(ankle), and push — push before twist, twist during run, run during yesterday.] 1. When did the running occur? Yesterday. 2. When did the twisting occur? Yesterday, during the running. 3. Did the pushing occur before the twisting? Yes. 4. Did Holly keep running after twisting her ankle? Maybe, maybe not.

125 125 Factors influencing Event Ordering (1) Max entered the room. He had drunk a lot of wine. TENSE: Past perfect indicates drinking precedes entering. (2) Max entered the room. Mary was seated behind the desk. ASPECT: State of being seated overlaps with entering. (3) He had borrowed some shirts from local villagers after his backpack went down. TEMPORAL MODIFIER: Going down precedes borrowing, based on temporal adverbial after (4) Iraq was defeated during the Gulf War. In ancient times it was the cradle of civilization. TIMEX: Being the cradle precedes being defeated, based on explicit time expression. (5) Max stood up. John greeted him. NARR_CONVENTION: Narrative convention applies, with standing up preceding greeting (6) Max fell. John pushed him. DISCOURSE_REL: Narrative convention overridden, based on Explanation relation (7) A drunken man died in the central Philippines when he put a firecracker under his armpit. DISCOURSE_REL: dying after putting, with temporal modifier used to instantiate Explanation relation (8) U.N. Secretary- General Boutros Boutros-Ghali Sunday opened a meeting of.... Boutros-Ghali arrived in Nairobi from South Africa, accompanied by Michel... WORLD KNOWLEDGE: arrival at the place of a meeting precedes opening a meeting

126 126 What's Needed for Computing Chronologies? Representation of tense and aspect Representation of events and time Linking of events and time Result: a temporal constraint network - Here, both events and times are represented as pairs of points (nodes) - Ordering relations (edges) are <, = [Diagram: constraint network for "Yesterday, Holly was running…", with the run event and the time yesterday each represented by start and end points (x1, x2, y1, y2) connected by < edges.] [Verhagen 2004]
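A sketch of such a point-based constraint network for the marathon story: each event or time is represented by a start and an end point, "<" edges encode the constraints stated in the story, and a plain transitive-closure loop stands in for full point-algebra propagation (a simplification).

```python
# A point-based temporal constraint network for the marathon story: each
# event/time is a pair of points, "<" edges encode the story's constraints,
# and a plain transitive-closure loop stands in for full point-algebra reasoning.
from itertools import product

before = {("run.s", "run.e"), ("twist.s", "twist.e"), ("push.s", "push.e"),
          ("yesterday.s", "yesterday.e"),
          ("push.e", "twist.s"),                           # push before twist
          ("run.s", "twist.s"), ("twist.e", "run.e"),      # twist during run
          ("yesterday.s", "run.s"), ("run.e", "yesterday.e")}  # run during yesterday

changed = True
while changed:
    changed = False
    for (a, b), (c, d) in product(list(before), repeat=2):
        if b == c and (a, d) not in before:
            before.add((a, d))
            changed = True

print(("push.e", "run.e") in before)        # True: the push ends before the run ends
print(("push.s", "yesterday.e") in before)  # True: the push starts before yesterday ends
```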

127 127 TimeML Annotation TimeML is a proposed metadata standard for markup of events and their temporal anchoring and ordering Consists of EVENT tags, TIMEX3 tags, and LINK tags - EVENTS are grouped into classes and have tense and aspect features - LINKS include overt and covert links Can be within or across sentences

128 128 How TimeML Differs from Previous Markups Extends TIMEX2 annotation to TIMEX3 - Temporal Functions: three years ago - Anchors to events and other temporal expressions: three years after the Gulf War - Addresses problem with Granularity/Periodicity: three days every month - Inserts start/end points for Durations: two weeks from June 7 Identifies signals determining interpretation of temporal expressions; - Temporal Prepositions: for, during, on, at; - Temporal Connectives: before, after, while. Identifies event expressions; - tensed verbs; has left, was captured, will resign; - stative adjectives; sunken, stalled, on board; - event nominals; merger, Military Operation, Gulf War; Creates dependencies between events and times: - Anchoring; John left on Monday. - Orderings; The party happened after graduation. - Embedding; John said Mary left.

129 129 TLINK TLINK or Temporal Link represents the temporal relationship holding between events or between an event and a time, and establishes a link between the involved entities, making explicit if they are: Simultaneous (happening at the same time) Identical: (referring to the same event) John drove to Boston. During his drive he ate a donut. One before the other: The police looked into the slayings of 14 women.In six of the cases suspects have already been arrested. One immediately before the other: All passengers died when the plane crashed into the mountain. One including the other: John arrived in Boston last Thursday. One holding during the duration of the other: One being the beginning of the other: John was in the gym between 6:00 p.m. and 7:00 p.m. One being the ending of the other: John was in the gym between 6:00 p.m. and 7:00 p.m.

130 130 SLINK SLINK or Subordination Link is used for contexts introducing relations between two events, or an event and a signal, of the following sort: Modal: Relation introduced mostly by modal verbs (should, could, would, etc.) and events that introduce a reference to a possible world --mainly I_STATEs: John should have bought some wine. Mary wanted John to buy some wine. Factive: Certain verbs introduce an entailment (or presupposition) of the argument's veracity. They include forget in the tensed complement, regret, manage: John forgot that he was in Boston last year. Mary regrets that she didn't marry John. Counterfactive: The event introduces a presupposition about the non-veracity of its argument: forget (to), unable to (in past tense), prevent, cancel, avoid, decline, etc. John forgot to buy some wine. John prevented the divorce. Evidential: Evidential relations are introduced by REPORTING or PERCEPTION: John said he bought some wine. Mary saw John carrying only beer. Negative evidential: Introduced by REPORTING (and PERCEPTION?) events conveying negative polarity: John denied he bought only beer. Negative: Introduced only by negative particles (not, nor, neither, etc.), which will be marked as SIGNALs, with respect to the events they are modifying: John didn't forget to buy some wine. John did not want to marry Mary.

131 131 Role of the machine in human annotation In cases of dense annotation (events, pos tags, word-sense tags, etc.), it can be too tedious for a human to annotate everything In such cases, it's helpful to have a computer program pre-annotate the data that the human then corrects The machine can also interact to flag invalid entries The machine can also provide visualization The machine can also augment the annotation with information that can be inferred

132 132 Annotating Chronology in The Marathon

133 133 Pre-Closure

134 134 Post-Closure

135 135 Automatic TIMEX2 tagging

136 136 TimeML Annotation Issues Problems: Weaknesses in guidelines -Links between subordinate clause and main clause of same/diff sentence -Difficulties in annotating states Granularity of temporal relations (72% agreement on temporal relations on common links) Density of links: the number of links is quadratic in the number of events, but less than half the eventualities are linked, so inter-annotator agreement on links is likely to be low Solutions: Adding more annotation conventions Lightening the annotation Expanding annotation using temporal reasoning Using heavily mixed-initiative approach Providing user with visualization tools during annotation Note: such problems are characteristic of semantic and discourse-level annotations!

137 137 TimeBank Browser and TimeML tools

138 Page 138 Strategy for Automatically Inferring Linguistic Information Develop a corpus of TimeML annotated documents - TimeML represents temporal adverbials, tense, grammatical aspect, temporal relations - Takes into account subordination and (to an extent) vagueness - Work on metric constraints for durations of states is ongoing (Hobbs) Develop initial computer taggers to tag Events, Times, and Links in the corpus Correct the corpus using a human Ensure that the annotations can be reproduced accurately - Inter-annotator reliability Use the corpus to train improved computer taggers

139 At the Florist's (mini-story) a. John went into the florist shop. b. He had promised Mary some flowers. c. She said she wouldn't forgive him if he forgot. d. So he picked out three red roses. From (Webber 1988)

140 Chronology of At the Florist's

141 At the Florist's: A Rhetorical Structure Theory account Assumes abstract nodes which are Rhetorical Relations Rhetorical relation annotations are not easily reproduced – a question of inter-annotator reliability [Tree diagram over the events Ea–Ed using the rhetorical relations Narration, Explanation, and Elaboration.]

142 Temporal Relations as Surrogates for Rhetorical Relations When E1 is left-sibling of E2 and E1 < E2, then typically, Narration(E1, E2) When E1 is right-sibling of E2 and E1 < E2, then typically Explanation(E2, E1) When E2 is a child node of E1, then typically Elaboration(E1, E2) constraints: {Eb < Ec, Ec < Ea, Ea < Ed} a. John went into the florist shop. b. He had promised Mary some flowers. c. She said she wouldnt forgive him if he forgot. d. So he picked out three red roses. Narr Elab Expl

143 Temporal Discourse Model Annotation Conventions 1.Each tree is rooted in an abstract node. 2.In the absence of any temporal adverbials or discourse markers, a tense shift will license the creation of an abstract node, with the tense shifted event being the leftmost daughter of the abstract node. The abstract node will then be inserted as the child of the immediately preceding text node. 3.In the absence of temporal adverbials and discourse markers, a stative event will always be placed as a child of the immediately preceding text event when the latter is non-stative, and as a sibling of the previous event when the latter is stative (as in a scene-setting fragment of discourse).

144 Representing States Approach: Minimality –A tensed stative predicate is represented as a node in the tree (progressives are treated as stative). John walked home. He was feeling great. –We represent the state of feeling great as being minimally a part of the event of walking, without committing to whether it extends before or after the event –A constraint is added to C indicating that this inclusion is minimal. Problem: Incompleteness Max entered the room. He was wearing a black shirt The system will not know whether the shirt was worn after he entered the room.

145 TDMs and DRT [DRS with discourse referents Ea, Eb, Ec, x, y, z, t1, t2, t3 and conditions: enter(Ea, x, theWhiteHart) & man(x) & PROG(wear(Eb, x, y)) & black-jacket(y) & serve(Ec, Bill, x, z) & beer(z) & t1 < n & Ea ⊆ t1 & t2 < n & Eb ○ t2 & Eb ⊇ Ea & t3 < n & Ec ⊆ t3 & Ea < Ec]

146 What's Needed for Computing TDMs? A Corpus of TDMs, annotated with high inter-annotator reliability Syntactic parsers for TDMs, trained on the corpus

147 147 Conclusion There are lots of computational tools for manual and automatic annotation of linguistic data and exploration of linguistic hypotheses The automatic tools aren't perfect, but neither are humans! An annotation scheme must be tested using guidelines and inter-annotator reliability Annotations must be prepared and used within standard XML-based frameworks There are many costs and tradeoffs in corpus preparation The yields can considerably speed up the pace of linguistic research

148 148 Desiderata for Indian Language Work The data needs to be encoded using standard character encoding schemes – UNICODE, or else ISCII Annotation needs to follow the best-practices methodology, including proof of replicability, and XML representation Experience has shown that linguists and computer scientists can work in synergy on this Once corpora are prepared according to these guidelines, automatic tools can be developed in India and abroad and used to improve linguistic processing of Indian languages - Morphological analyzers, stemmers, etc. - Part-of-speech taggers - Syntactic Parsers - Word-Sense Disambiguators - Temporal Taggers - Information Extraction Systems - Text Summarizers - Statistical MT Systems - etc.

149 Free Resources (contact me) TIMEX2 corpora and tools: (English, Korean, Spanish) TimeML and annotation tools: AQUAINT corpus, and TimeML software: watch this space PRONTO and iprolink corpora, guidelines, tagsets (see my web site)

150 The Changing Environment If statistical rules induced from examples perform just as well as rules derived from intuition, then this suggests that probabilistic statistical linguistic rules might help explain or model human linguistic behavior. It also suggests that humans might learn from experience by means of induction using statistical regularities. For many years, corpus linguistic research rarely examined statistics above the level of words, due to the lack of availability of broad-coverage parsers and statistical models that could handle syntax and other levels of hidden structure (Manning 2003). The present climate, with plenty of tools and statistical models, should allow corpus linguistics to extend its descriptive and explanatory scope dramatically.

151 151 Ngrams Details Consider a sequence of words W1…Wn, e.g., I saw a rabbit. What's P(W1…Wn)? Note that we can't just find sequences of length n and count them - there won't be enough data. Chain Rule of probability: P(W1,..,Wn) = P(W1)P(W2|W1)P(W3|W1,W2)..P(Wn|W1,W2,..,Wn-1) - But you still have the problem of lacking enough data Bigram model - Approximates P(Wn|W1…Wn-1) by P(Wn|Wn-1) - Assumes the probability of a word depends just on the previous word. This means that you don't have to look back more than one word. - P(I saw a rabbit) ≈ P(I|<s>)*P(saw|I)*P(a|saw)*P(rabbit|a) - More generally: P(W1…Wn) ≈ ∏i=1..n P(Wi|Wi-1) A trigram model would look 2 words back into the past - P(I saw a rabbit) ≈ P(I|<s> <s>)*P(saw|<s> I)*P(a|I saw)*P(rabbit|saw a)
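A sketch of a maximum-likelihood bigram model over a tiny invented corpus (a real model would need smoothing, as discussed earlier); the sentences and counts are illustrative only.

```python
# A maximum-likelihood bigram model over a tiny invented corpus
# (a real model would need smoothing, as discussed earlier).
from collections import Counter

sentences = [["i", "saw", "a", "rabbit"], ["i", "saw", "a", "dog"]]
bigrams, unigrams = Counter(), Counter()
for s in sentences:
    toks = ["<s>"] + s
    unigrams.update(toks)
    bigrams.update(zip(toks, toks[1:]))

def p_bigram(w, prev):
    return bigrams[(prev, w)] / unigrams[prev] if unigrams[prev] else 0.0

def p_sentence(words):
    p = 1.0
    for prev, w in zip(["<s>"] + words, words):
        p *= p_bigram(w, prev)
    return p

print(p_sentence(["i", "saw", "a", "rabbit"]))   # 1 * 1 * 1 * 0.5 = 0.5
```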

152 152 POS Tagging Based on N-grams Problem: Find C which maximizes P(W|C) * P(C) Here W=W1..Wn and C=C1..Cn (these were sequences, remember?) P(W1,..,Wn) = P(W1)P(W2|W1)P(W3|W1,W2)..P(Wn|W1,W2,..,Wn-1) - Using the bigram model, we get: P(W1…Wn | C1…Cn) ≈ ∏i=1..n P(Wi|Ci) P(C1…Cn) ≈ ∏i=1..n P(Ci|Ci-1) So, we want to find the value of C1..Cn which maximizes: ∏i=1..n P(Wi|Ci) * P(Ci|Ci-1) where the lexical generation probabilities and the pos bigram probabilities are estimated from training data

153 153 Problems in Event Anchoring States - John walked home. He was feeling great. How long does feeling great last? - => We need a minimal duration for states - a. Mary entered the President's Office. b. A copy of the budget was on the president's desk. c. The president's financial advisor stood beside it. d. The president sat regarding both admiringly. e. The advisor spoke. (Dowty 1986) Was the budget on the desk before she entered the office? - => perceived scene presents an imperfective view of states, not indicating their true onsets Vagueness The attack lasted 2-3 weeks. Recently, Holly turned 16. Next summer, Holly may run Three days later, David pushed her - => temporal reasoning has to deal with vagueness

154 154 Problems in Event Anchoring (contd) Vagueness (contd) - John hurried to Mary's house after work. But Mary had already left for dinner. - => we need to track reference time and decide when reference times coincide Modality - John should have brought some wine. Did he bring wine? No. - John prevented the divorce. Did the divorce happen? No. => we need to know about subordination Implicit Information Yesterday, Holly fell. (implicit on) Holly fell. David pushed her. (implicit because) => we need discourse modeling
