
1 Ch 10 Part-of-Speech Tagging Edited from: L. Venkata Subramaniam February 28, 2002

2 Part-of-Speech Tagging
Tagging is the task of labeling (or tagging) each word in a sentence with its appropriate part of speech.
The[AT] representative[NN] put[VBD] chairs[NNS] on[IN] the[AT] table[NN].
Tagging is a case of limited syntactic disambiguation: many words have more than one syntactic category.
Tagging has limited scope: we fix only the syntactic categories of words and do not do a complete parse.

3 Information Sources in Tagging
How do we decide the correct POS for a word?
Syntagmatic information: look at the tags of other words in the context of the word we are interested in.
Lexical information: predict a tag based on the word itself. Even when a word has several possible parts of speech, it usually occurs predominantly as one particular POS.

4 Markov Model Taggers
We view the sequence of tags in a text as a Markov chain, which assumes:
Limited horizon: P(X_{i+1} = t^j | X_1, …, X_i) = P(X_{i+1} = t^j | X_i)
Time invariance (stationarity): P(X_{i+1} = t^j | X_i) = P(X_2 = t^j | X_1)

5 The Visible Markov Model Tagging Algorithm
The MLE of tag t^k following tag t^j is obtained from the training corpus:
a_{jk} = P(t^k | t^j) = C(t^j, t^k) / C(t^j)
The probability of a word being emitted by a tag:
b_{jl} = P(w^l | t^j) = C(w^l, t^j) / C(t^j)
The best tagging sequence t_{1,n} for a sentence w_{1,n}:
argmax_{t_{1,n}} P(t_{1,n} | w_{1,n})
  = argmax_{t_{1,n}} P(w_{1,n} | t_{1,n}) P(t_{1,n}) / P(w_{1,n})
  = argmax_{t_{1,n}} P(w_{1,n} | t_{1,n}) P(t_{1,n})
  = argmax_{t_{1,n}} ∏_{i=1}^{n} P(w_i | t_i) P(t_i | t_{i-1})
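A minimal sketch of these MLE estimates, assuming a tiny hand-made tagged corpus and no smoothing (so unseen events get probability 0); the corpus and the PERIOD boundary tag are illustrative assumptions:

```python
from collections import defaultdict

tagged_corpus = [  # hypothetical training data: sentences of (word, tag)
    [("the", "AT"), ("representative", "NN"), ("put", "VBD"),
     ("chairs", "NNS"), ("on", "IN"), ("the", "AT"), ("table", "NN")],
]

trans_counts = defaultdict(int)   # C(t^j, t^k): tag bigram counts
tag_counts = defaultdict(int)     # C(t^j): unigram tag counts
emit_counts = defaultdict(int)    # C(w^l, t^j): word-tag counts

for sentence in tagged_corpus:
    prev = "PERIOD"               # sentence-boundary tag, as in the slides
    tag_counts[prev] += 1
    for word, tag in sentence:
        trans_counts[(prev, tag)] += 1
        emit_counts[(word, tag)] += 1
        tag_counts[tag] += 1
        prev = tag

def a(tj, tk):
    """MLE transition probability P(t^k | t^j)."""
    return trans_counts[(tj, tk)] / tag_counts[tj]

def b(tj, wl):
    """MLE emission probability P(w^l | t^j)."""
    return emit_counts[(wl, tj)] / tag_counts[tj]

print(a("AT", "NN"))     # -> 1.0 in this tiny corpus
print(b("NN", "table"))  # -> 0.5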

6 The Viterbi Algorithm
We determine the optimal tags for a sentence of length n:
comment: Initialization
δ_1(PERIOD) := 1.0
δ_1(t) := 0.0 for t ≠ PERIOD
comment: Induction
for i := 1 to n step 1 do
  for all tags t^j do
    δ_{i+1}(t^j) := max_{1 ≤ k ≤ T} [δ_i(t^k) P(w_{i+1} | t^j) P(t^j | t^k)]
    ψ_{i+1}(t^j) := argmax_{1 ≤ k ≤ T} [δ_i(t^k) P(w_{i+1} | t^j) P(t^j | t^k)]
  end
end
comment: Termination and path readout
X_{n+1} := argmax_{1 ≤ j ≤ T} δ_{n+1}(t^j)
for i := n downto 1 do X_i := ψ_{i+1}(X_{i+1})
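A runnable sketch of this recursion, reusing the a() and b() estimates from the previous snippet; the tagset and the PERIOD boundary tag are assumptions carried over from the slides:

```python
def viterbi(words, tags):
    # delta[i][t]: probability of the best tag sequence ending in tag t
    # after emitting the first i words; psi stores the backpointers.
    delta = [{t: 0.0 for t in tags}]
    delta[0]["PERIOD"] = 1.0
    psi = [{}]

    for i, word in enumerate(words):
        delta.append({})
        psi.append({})
        for tj in tags:
            # maximize over the previous tag t^k, as in the induction step
            best_k, best_p = None, -1.0
            for tk in tags:
                p = delta[i][tk] * b(tj, word) * a(tk, tj)
                if p > best_p:
                    best_k, best_p = tk, p
            delta[i + 1][tj] = best_p
            psi[i + 1][tj] = best_k

    # termination and path readout via the backpointers
    last = max(tags, key=lambda t: delta[len(words)][t])
    path = [last]
    for i in range(len(words), 1, -1):
        path.append(psi[i][path[-1]])
    return list(reversed(path))

tags = ["PERIOD", "AT", "NN", "VBD", "NNS", "IN"]
print(viterbi(["the", "representative", "put", "chairs",
               "on", "the", "table"], tags))
```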

7 Unknown Words
Some tags are more common than others (for example, a new word is most likely a verb or a noun, not a preposition or an article).
Use features of the word (morphological and other cues; for example, words ending in -ed are likely to be past tense forms or past participles).
Use context.
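A hedged sketch of morphology-based guessing for unknown words; the suffix list and the tag guesses below are illustrative assumptions, not a complete rule set:

```python
def guess_tag(word):
    """Guess a tag for an out-of-vocabulary word from surface cues."""
    if word[0].isupper():
        return "NP"    # capitalized: likely a proper noun
    if word.endswith("ed"):
        return "VBD"   # past tense form or past participle
    if word.endswith("ing"):
        return "VBG"   # gerund or present participle
    if word.endswith("ly"):
        return "RB"    # adverb
    if word.endswith("s"):
        return "NNS"   # plural noun
    return "NN"        # default: open-class words are often nouns

print(guess_tag("frobnicated"))  # -> VBD
```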

8 Hidden Markov Model Taggers
Often a large tagged training set is not available. In this case we can use an HMM to learn the regularities of tag sequences. The states of the HMM correspond to the tags, and the output alphabet consists of the words in the dictionary or classes of words.

9 Initialization of the HMM Tagger (Jelinek, 1985)
The output alphabet consists of words. Emission probabilities are given by:
b_{j.l} = b*_{j.l} C(w^l) / Σ_{w^m} b*_{j.m} C(w^m)
where the sum is over all words w^m in the dictionary, and
b*_{j.l} = 0 if t^j is not a part of speech allowed for w^l, and 1/T(w^l) otherwise, where T(w^l) is the number of tags allowed for w^l.
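A minimal sketch of this initialization, assuming a small hypothetical dictionary of allowed tags per word and raw word counts C(w^l) from an untagged corpus:

```python
dictionary = {               # allowed tags per word (assumed)
    "the": {"AT"},
    "put": {"VBD", "VBN", "NN"},
    "table": {"NN", "VB"},
}
word_counts = {"the": 100, "put": 20, "table": 10}  # C(w^l), assumed

def b_star(tag, word):
    """b*_{j.l}: uniform over the tags the dictionary allows for word."""
    allowed = dictionary[word]
    return 1.0 / len(allowed) if tag in allowed else 0.0

def emission(tag, word):
    """b_{j.l}: b* reweighted by word frequency, normalized per tag."""
    denom = sum(b_star(tag, w) * c for w, c in word_counts.items())
    return b_star(tag, word) * word_counts[word] / denom

print(emission("NN", "table"))  # -> ~0.43 with these toy counts
```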

10 Initialization (cont.) (Kupiec, 1992)
The output alphabet consists of word equivalence classes, i.e., metawords u_L, where L is a subset of the integers from 1 to T (T is the number of different tags in the tag set):
b_{j.L} = b*_{j.L} C(u_L) / Σ_{u_{L'}} b*_{j.L'} C(u_{L'})
where the sum in the denominator is over all the metawords u_{L'}, and
b*_{j.L} = 0 if j is not in L, and 1/|L| otherwise, where |L| is the number of indices in L.
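A sketch of this variant, where words sharing the same set of allowed tags form one equivalence class u_L; the class counts C(u_L) below are assumed toy values from an untagged corpus:

```python
class_counts = {                      # hypothetical C(u_L) values
    frozenset({"AT"}): 120,
    frozenset({"NN", "VB"}): 45,
    frozenset({"VBD", "VBN", "NN"}): 30,
}

def b_star_class(tag, L):
    """b*_{j.L}: uniform over the tags in class L, 0 if tag not in L."""
    return 1.0 / len(L) if tag in L else 0.0

def class_emission(tag, L):
    """b_{j.L}: class-count weighting, normalized over all metawords."""
    denom = sum(b_star_class(tag, Lp) * c for Lp, c in class_counts.items())
    return b_star_class(tag, L) * class_counts[L] / denom

print(class_emission("NN", frozenset({"NN", "VB"})))  # -> ~0.69
```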

11 Training the HMM
Once the initialization is completed, the HMM is trained using the Forward-Backward algorithm.
Tagging using the HMM: use the Viterbi algorithm.
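A compact sketch of one Forward-Backward (Baum-Welch) re-estimation step for a generic discrete HMM; the toy matrices A, B, pi and the integer-coded observation sequence are assumptions (in a tagger, states would be tags and symbols would be words or word classes):

```python
import numpy as np

A = np.array([[0.6, 0.4],       # transition probabilities, 2 states
              [0.3, 0.7]])
B = np.array([[0.7, 0.2, 0.1],  # emission probabilities, 3 symbols
              [0.1, 0.3, 0.6]])
pi = np.array([0.5, 0.5])       # initial state distribution
obs = [0, 1, 2, 0, 2]           # observed symbol indices

def baum_welch_step(A, B, pi, obs):
    n, (N, M) = len(obs), B.shape
    alpha = np.zeros((n, N))    # forward probabilities
    beta = np.zeros((n, N))     # backward probabilities
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, n):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0
    for t in range(n - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)  # P(state_t = i | obs)

    # expected transition counts xi, summed over time
    xi = np.zeros((N, N))
    for t in range(n - 1):
        x = alpha[t][:, None] * A * B[:, obs[t + 1]] * beta[t + 1]
        xi += x / x.sum()

    # re-estimated parameters
    new_A = xi / gamma[:-1].sum(axis=0)[:, None]
    new_B = np.zeros_like(B)
    for k in range(M):
        new_B[:, k] = gamma[np.array(obs) == k].sum(axis=0)
    new_B /= gamma.sum(axis=0)[:, None]
    return new_A, new_B, gamma[0]

A, B, pi = baum_welch_step(A, B, pi, obs)  # iterate until convergence
```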

12 Transformation-Based Tagging
Exploits a wider range of lexical and syntactic regularities:
Conditions tags on preceding words, not just preceding tags.
Uses more context than bigram or trigram models.

13 Transformations
A transformation consists of two parts: a triggering environment and a rewrite rule. Examples of some transformations learned in transformation-based tagging (see the sketch below):

Source tag  Target tag  Triggering environment
NN          VB          previous tag is TO
VBP         VB          one of the previous three tags is MD
JJR         RBR         next tag is JJ
VBP         VB          one of the previous two words is n't
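A minimal sketch of representing and applying rules like those in the table; the (source, target, trigger) encoding is an assumption about one convenient implementation, not Brill's original code:

```python
def prev_tag_is(trigger):
    return lambda tags, words, i: i >= 1 and tags[i - 1] == trigger

def prev_word_in_window(word, k):
    return lambda tags, words, i: word in words[max(0, i - k):i]

rules = [
    ("NN", "VB", prev_tag_is("TO")),             # NN -> VB after TO
    ("VBP", "VB", prev_word_in_window("n't", 2)),
]

def apply_rules(words, tags):
    """Apply each transformation left to right over the sentence."""
    tags = list(tags)
    for src, tgt, triggered in rules:
        for i in range(len(tags)):
            if tags[i] == src and triggered(tags, words, i):
                tags[i] = tgt
    return tags

words = ["to", "book", "a", "flight"]
tags = ["TO", "NN", "AT", "NN"]
print(apply_rules(words, tags))  # -> ['TO', 'VB', 'AT', 'NN']
```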

14 Learning Algorithm
The learning algorithm selects the best transformations and determines their order of application (a sketch follows):
Initially tag each word with its most frequent tag.
Iteratively choose the transformation that most reduces the error rate.
Stop when no remaining transformation reduces the error rate by more than a prespecified threshold.
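A hedged sketch of the greedy selection loop, reusing the (source, target, trigger) rule format from the previous snippet; in practice the candidate list would be instantiated from rule templates over the training corpus, so the explicit candidates argument here is a simplifying assumption:

```python
def apply_rule(rule, words, tags):
    """Apply one transformation over the whole tag sequence."""
    src, tgt, triggered = rule
    out = list(tags)
    for i in range(len(out)):
        if out[i] == src and triggered(out, words, i):
            out[i] = tgt
    return out

def learn(words, gold_tags, tags, candidates, threshold=0):
    """Greedily pick transformations that most reduce tagging errors."""
    def errors(t):
        return sum(a != b for a, b in zip(t, gold_tags))
    learned = []
    while True:
        base = errors(tags)
        best, best_gain = None, threshold
        for rule in candidates:
            gain = base - errors(apply_rule(rule, words, tags))
            if gain > best_gain:     # must beat the threshold strictly
                best, best_gain = rule, gain
        if best is None:
            return learned           # stop: no rule clears the threshold
        tags = apply_rule(best, words, tags)
        learned.append(best)
```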

15 Tagging Accuracy
Accuracy ranges from 95% to 97% and depends on:
The amount of training data available.
The tag set.
The difference between the training corpus and dictionary and the corpus of application.
The number of unknown words in the corpus of application.

16 Applications of Tagging
Partial parsing: syntactic analysis.
Information extraction: tagging and partial parsing help identify useful terms and the relationships between them.
Information retrieval: noun phrase recognition and query-document matching based on meaningful units rather than individual terms.
Question answering: analyzing a query to determine what type of entity the user is looking for and how it relates to other noun phrases mentioned in the question.

