Boosting Applied to Tagging and PP Attachment By Aviad Barzilai.


1 Boosting Applied to Tagging and PP Attachment By Aviad Barzilai

2 Outline: Introduction to AdaBoost; AdaBoost & NLP; Tagging & PP Attachment

3 Introduction to AdaBoost – binary classification. Weak classifier: a classifier that does better than chance. Strong classifier: a classifier with low error rates. Weak classifiers can be easier to find than strong ones.

4 Introduction to AdaBoost – binary classification. Notation: X – examples; Y – labels ({-1, +1}); D – weights; H(x) – classifiers. Goal: build a strong classifier using weak classifiers.

5 Introduction to AdaBoost – binary classification. Find the “best” weak classifier, update the weights to reflect its errors, and REPEAT! (*A formal definition of “best” is given later.)


8 Introduction to AdaBoost – binary classification. Create a strong classifier: the final classifier is a weighted vote of the weak classifiers, H(x) = sign(Σ_t α_t h_t(x)).
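To make the loop on slides 5–8 concrete, here is a minimal sketch of binary AdaBoost in Python. The one-feature threshold stumps used as weak classifiers, and all names in the snippet, are illustrative choices, not the article's setup:

```python
import numpy as np

def adaboost(X, y, rounds=50):
    """Binary AdaBoost: y in {-1, +1}; weak learner = best threshold stump."""
    n, d = X.shape
    D = np.full(n, 1.0 / n)          # importance weights, initially uniform
    ensemble = []                     # list of (alpha, feature, threshold, sign)
    for _ in range(rounds):
        # Find the "best" weak classifier: the stump with lowest weighted error.
        best = None
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for s in (+1, -1):
                    pred = s * np.where(X[:, j] > thr, 1, -1)
                    err = D[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, s)
        err, j, thr, s = best
        err = min(max(err, 1e-10), 1 - 1e-10)   # guard against division by zero
        alpha = 0.5 * np.log((1 - err) / err)   # confidence of this round
        pred = s * np.where(X[:, j] > thr, 1, -1)
        # Update the weights to reflect the errors: misclassified examples gain weight.
        D *= np.exp(-alpha * y * pred)
        D /= D.sum()
        ensemble.append((alpha, j, thr, s))
    return ensemble

def strong_classify(ensemble, X):
    """Weighted vote of the weak classifiers: H(x) = sign(sum_t alpha_t h_t(x))."""
    votes = sum(a * s * np.where(X[:, j] > thr, 1, -1)
                for a, j, thr, s in ensemble)
    return np.sign(votes)
```

Each round picks the stump with the lowest weighted error, then reweights so the next round concentrates on the mistakes.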

9 Section: AdaBoost & NLP

10 AdaBoost & NLP. The idea of boosting is to combine many simple “rules of thumb”, such as “the current word is a noun if the previous word is the.” Such rules often give incorrect classifications. The main idea of boosting is to combine many such rules in a principled manner to produce a single highly accurate classification rule. These “rules of thumb” are called weak hypotheses.

11 AdaBoost & NLP. A weak hypothesis h_t maps each example x to a real number h_t(x). The sign of this number is interpreted as the predicted class (-1 or +1) of example x. The magnitude |h_t(x)| is interpreted as the level of confidence in the prediction. Example weak hypothesis: “the current word is a noun if the previous word is the.”

12 AdaBoost & NLP. AdaBoost outputs a final hypothesis which makes predictions using a simple vote of the weak hypotheses’ predictions, taking into account the varying confidences of the different predictions.

13 AdaBoost & NLP. Start from the training set with importance weights that are initially uniform; get a weak hypothesis; update the importance weights so that examples the hypothesis gets wrong gain weight (D_{t+1}(i) ∝ D_t(i) · exp(-y_i h_t(x_i))); repeat; then output the final hypothesis. “In our experiments, we used cross validation to choose the number of rounds T.”

14 AdaBoost & NLP. For each hypothesis, W_{b,s} is the sum of weights of examples for which y_i = s and the result of the predicate is b. Choose the “best” hypothesis (minimal error).

15 AdaBoost & NLP. Our default approach to multiclass problems is to use Schapire and Singer’s (1998) AdaBoost.MH algorithm. The main idea of this algorithm is to regard each example with its multiclass label as several binary-labeled examples. Suppose that the possible classes are 1,…,k. We map each original example x with label y to k binary-labeled derived examples (x, 1),…,(x, k), where example (x, c) is labeled +1 if c = y and -1 otherwise.
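A small sketch of this label mapping; the three-tag class set is a made-up illustration (the corpus actually has 81 labels):

```python
def to_binary_examples(x, y, classes):
    """AdaBoost.MH: turn one multiclass example into k binary-labeled pairs."""
    return [((x, c), +1 if c == y else -1) for c in classes]

# A word labeled "NN" against three illustrative classes:
pairs = to_binary_examples("dog", "NN", ["NN", "VB", "DT"])
# -> [(("dog", "NN"), +1), (("dog", "VB"), -1), (("dog", "DT"), -1)]
```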

16 Section: Tagging & PP Attachment

17 Tagging (with AdaBoost). “In corpus linguistics, part-of-speech tagging is the process of marking up the words in a text (corpus) as corresponding to a particular part of speech, based on both its definition, as well as its context. A simplified form of this is commonly taught to school-age children, in the identification of words as nouns, verbs, adjectives, adverbs, etc.”

18 Tagging (with AdaBoost). The authors used the UPenn Treebank corpus (Marcus et al., 1993). The corpus uses 80 labels, which comprise 45 parts of speech properly so-called, and 35 indeterminate tags, representing annotator uncertainty. They introduced an 81st label, ##, for paragraph separators.

19 Tagging (with AdaBoost). The straightforward way of applying boosting to tagging is to use AdaBoost.MH. Each word token represents an example, and the classes are the 81 part-of-speech tags.

20 Tagging (with AdaBoost). Weak hypotheses are identified with “attribute=value” predicates. The article uses three types of attributes: lexical attributes, contextual attributes, and morphological attributes.

21 Tagging (with AdaBoost). Lexical attributes: the current word as a downcased string (S); its capitalization (C); and its most-frequent tag in training (T). Contextual attributes: the string (LS), capitalization (LC), and most-frequent tag (LT) of the preceding word, and similarly for the following word (RS, RC, RT). Morphological attributes: the inflectional suffix (I) of the current word, as provided by an automatic stemmer; also the last two (S2) and last three (S3) letters of the current word.
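A sketch of computing these attributes for one token. The most-frequent-tag table and the capitalization encoding are stubbed-in assumptions, and the inflectional suffix (I) is omitted because it depends on the article's stemmer:

```python
def token_attributes(words, tags_most_freq, i):
    """Attribute=value pairs for the token at position i.

    words: list of tokens in the sentence
    tags_most_freq: dict mapping a downcased word to its most frequent
        training tag (an assumed, precomputed table)
    """
    w = words[i]
    prev_w = words[i - 1] if i > 0 else "##"        # paragraph-separator label
    next_w = words[i + 1] if i + 1 < len(words) else "##"

    def cap(s):                                      # crude capitalization class
        return "upper" if s[:1].isupper() else "lower"

    return {
        # Lexical attributes
        "S": w.lower(),
        "C": cap(w),
        "T": tags_most_freq.get(w.lower(), "UNK"),
        # Contextual attributes (preceding and following word)
        "LS": prev_w.lower(), "LC": cap(prev_w),
        "LT": tags_most_freq.get(prev_w.lower(), "UNK"),
        "RS": next_w.lower(), "RC": cap(next_w),
        "RT": tags_most_freq.get(next_w.lower(), "UNK"),
        # Morphological attributes (the inflectional suffix I would come
        # from an automatic stemmer, not shown here)
        "S2": w[-2:], "S3": w[-3:],
    }
```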

22 Tagging (with AdaBoost). The authors conducted four experiments on tagging the UPenn Treebank corpus using AdaBoost.

23 Tagging (with AdaBoost). The 3.28% error rate is not significantly different (at p = 0.05) from the error rate of the best-known single tagger, Ratnaparkhi’s Maxent tagger, which achieves 3.11% error on our data. The results are not as good as those achieved by Brill and Wu’s voting scheme, yet the experiments described in the article use very simple features.

24 Section: Tagging (with MaxEnt)

25 Tagging (with MaxEnt). “According to the principle of maximum entropy, if nothing is known about a distribution except that it belongs to a certain class, then the distribution with the largest entropy should be chosen as the default.”

26 Tagging (with MaxEnt). In MaxEnt we “model all that is known and assume nothing about what is unknown”. Model all that is known: satisfy a set of constraints that must hold. Assume nothing about what is unknown: choose the most “uniform” distribution, i.e., the one with maximum entropy.

27 Tagging (with MaxEnt). A feature (a.k.a. feature function, indicator function) is a binary-valued function on events, f : A × B → {0, 1}, where A is the set of possible classes and B is the space of contexts (e.g. neighboring words/tags).
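In code, such an indicator feature might look like the following; the particular predicate is an invented example in the spirit of the slides:

```python
def f_det_noun(tag, context):
    """Indicator feature: fires (returns 1) when the class is NN and the
    previous word in the context is "the"."""
    return 1 if tag == "NN" and context.get("prev_word") == "the" else 0

# f_det_noun("NN", {"prev_word": "the"})  -> 1
# f_det_noun("VB", {"prev_word": "the"})  -> 0
```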

28 Tagging (with MaxEnt). [equation slide; content not captured in the transcript]

29 Tagging (with MaxEnt). Observed expectation (computed using the observed probability) vs. model expectation (computed using the model’s probability). The task: find p* subject to the constraints that each feature’s model expectation equals its observed expectation. The solution is obtained iteratively.
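One classic way to obtain the solution iteratively is Generalized Iterative Scaling (GIS). The sketch below is an illustrative conditional-MaxEnt version, not the article's implementation; using the maximum active-feature count as C and omitting the correction feature are common GIS simplifications:

```python
import numpy as np

def gis(train, features, classes, iters=100):
    """Generalized Iterative Scaling for a conditional MaxEnt model.

    train: list of (context, true_class) pairs
    features: list of indicator functions f(cls, context) -> 0 or 1
    """
    lam = np.zeros(len(features))
    # C: max number of features active on any (context, class) pair.
    C = max(sum(f(c, ctx) for f in features)
            for ctx, _ in train for c in classes) or 1
    # Observed expectations: empirical average of each feature's value.
    obs = np.array([sum(f(y, ctx) for ctx, y in train)
                    for f in features]) / len(train)
    for _ in range(iters):
        model = np.zeros(len(features))
        for ctx, _ in train:
            # p(c | ctx) under the current weights (log-linear model)
            scores = np.array([sum(lam[j] * features[j](c, ctx)
                                   for j in range(len(features)))
                               for c in classes])
            p = np.exp(scores - scores.max())
            p /= p.sum()
            for j, f in enumerate(features):
                model[j] += sum(p[k] * f(c, ctx) for k, c in enumerate(classes))
        model /= len(train)
        # GIS update: move each weight toward satisfying its constraint.
        lam += np.log((obs + 1e-12) / (model + 1e-12)) / C
    return lam
```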

30 Tagging (with MaxEnt). “The 3.28% error rate is not significantly different (at p = 0.05) from the error rate of the best-known single tagger, Ratnaparkhi’s Maxent tagger, which achieves 3.11% error on our data.”

31 PP attachment (prepositional phrase attachment with AdaBoost). “In grammar, a preposition is a part of speech that introduces a prepositional phrase. For example, in the sentence ‘The cat sleeps on the sofa’, the word ‘on’ is a preposition, introducing the prepositional phrase ‘on the sofa’. Simply put, a preposition indicates a relation between things mentioned in a sentence.”

32 PP attachment. Example of PP attachment: “Congress accused the president of peccadillos” is classified according to the attachment site of the prepositional phrase. Attachment to N: accused [the president of peccadillos]. Attachment to V: accused [the president] [of peccadillos].

33 PP attachment. The cases of PP attachment that the article addresses define a binary classification problem. Examples have binary labels: positive represents attachment to the noun, and negative represents attachment to the verb.

34 PP attachment. The authors used the same training and test data as Collins and Brooks (1995).

35 PP attachment. Each PP-attachment example is represented by its values for four attributes: the main verb (V), the head word of the direct object (N1), the preposition (P), and the head word of the object of the preposition (N2). For instance, in the example above, V = accused, N1 = president, P = of, and N2 = peccadillos. Examples have binary labels: positive represents attachment to the noun, and negative represents attachment to the verb.

36 PP attachment. The weak hypotheses used correspond to “attribute=value” predicates and conjunctions thereof. There are 16 predicates considered for each example. For the example above, three of these 16 predicates are (V = accused ∧ N1 = president ∧ N2 = peccadillos), (P = with), and (V = accused ∧ P = of).
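A sketch that enumerates the conjunction predicates as subsets of the four attributes. Counting the empty, always-true conjunction to reach 2^4 = 16 is an assumption about how the article arrives at 16:

```python
from itertools import combinations

ATTRS = ("V", "N1", "P", "N2")

def predicates(example):
    """All attribute-subset conjunction predicates for one example.
    example: dict like {"V": "accused", "N1": "president",
                        "P": "of", "N2": "peccadillos"}"""
    preds = []
    for r in range(len(ATTRS) + 1):   # r = 0 gives the empty (always-true) conjunction
        for subset in combinations(ATTRS, r):
            preds.append(tuple((a, example[a]) for a in subset))
    return preds                       # 2^4 = 16 predicates

def holds(pred, example):
    """A conjunction holds iff every attribute in it matches its value."""
    return all(example[a] == v for a, v in pred)
```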

37 PP attachment. [slide content, likely a results figure, not captured in the transcript]

38 PP attachment. After 20,000 rounds of boosting the test error was down to 14.6 ± 0.6%. This is indistinguishable from the best known result for this problem, namely 14.5 ± 0.6%, reported by Collins and Brooks on exactly the same data.

39 FIN

40 AdaBoost & NLP (backup slides)

41 AdaBoost & NLP. The article uses very simple weak hypotheses that test the value of a Boolean predicate and make a prediction based on that value. The predicates used are of the form “a = v”, for a an attribute and v a value; for example, “PreviousWord = the”. If, on a given example x, the predicate holds, the weak hypothesis outputs prediction p_1; otherwise p_0.

42 AdaBoost & NLP. Schapire and Singer (1998) prove that the training error of the final hypothesis is at most ∏_t Z_t, where Z_t = Σ_i D_t(i) · exp(-y_i h_t(x_i)). This suggests that the training error can be greedily driven down by designing a weak learner which, on round t of boosting, attempts to find a weak hypothesis h that minimizes Z_t.

43 AdaBoost & NLP. Given a predicate (e.g. “PreviousWord = the”), we choose p_0 and p_1 to minimize Z. Schapire and Singer (1998) show that Z is minimized when p_b = (1/2) · ln(W_{b,+1} / W_{b,-1}), where W_{b,s} is the sum of weights of examples for which y_i = s and the result of the predicate is b; the minimum value is then Z = 2 · Σ_b √(W_{b,+1} · W_{b,-1}).

44 AdaBoost & NLP. In practice, very large values of p_0 and p_1 can cause numerical problems and may also lead to overfitting. Therefore, we usually “smooth” these values, e.g. p_b = (1/2) · ln((W_{b,+1} + ε) / (W_{b,-1} + ε)) for a small ε, as in the sketch below.
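Putting slides 41–44 together, a sketch of the weak learner: score every “a = v” predicate by Z and return smoothed predictions. The candidate-generation strategy and the value of ε are assumptions:

```python
import math

def best_weak_hypothesis(examples, D, eps=1e-4):
    """Pick the 'a = v' predicate minimizing Z, with smoothed predictions.

    examples: list of (attrs, y) where attrs is a dict and y is -1 or +1
    D: list of importance weights, one per example
    eps: smoothing constant (an assumed small value)
    """
    best = None
    # Candidate predicates: every attribute=value pair seen in the data.
    candidates = {(a, v) for attrs, _ in examples for a, v in attrs.items()}
    for a, v in candidates:
        # W[b][s]: total weight of examples with label s and predicate result b.
        W = {0: {-1: 0.0, +1: 0.0}, 1: {-1: 0.0, +1: 0.0}}
        for (attrs, y), d in zip(examples, D):
            b = 1 if attrs.get(a) == v else 0
            W[b][y] += d
        # Z = 2 * sum_b sqrt(W_{b,+1} * W_{b,-1})
        Z = 2 * sum(math.sqrt(W[b][+1] * W[b][-1]) for b in (0, 1))
        if best is None or Z < best[0]:
            # Smoothed confidence-rated predictions p_0 and p_1.
            p = {b: 0.5 * math.log((W[b][+1] + eps) / (W[b][-1] + eps))
                 for b in (0, 1)}
            best = (Z, (a, v), p)
    return best  # (Z, predicate, {0: p_0, 1: p_1})
```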

45 AdaBoost & NLP. For each hypothesis, W_{b,s} is the sum of weights of examples for which y_i = s and the result of the predicate is b. Choose the “best” hypothesis (maximum confidence).

46 AdaBoost & NLP. [slide content not captured in the transcript]

47 AdaBoost & NLP. Our default approach to multiclass problems is to use Schapire and Singer’s (1998) AdaBoost.MH algorithm. The main idea of this algorithm is to regard each example with its multiclass label as several binary-labeled examples. Suppose that the possible classes are 1,…,k. We map each original example x with label y to k binary-labeled derived examples (x, 1),…,(x, k), where example (x, c) is labeled +1 if c = y and -1 otherwise.

48 AdaBoost & NLP. We maintain a distribution over pairs (x, c), treating each such pair as a separate example. Weak hypotheses are identified with predicates over (x, c) pairs, though they now ignore c. The prediction weights p_{c,0} and p_{c,1}, however, are chosen separately for each class c.
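A sketch of computing the per-class prediction weights for one shared predicate, following the description above; the data layout (a weight per (example, class) pair) is an illustrative choice:

```python
import math

def mh_predictions(examples, D, classes, predicate, eps=1e-4):
    """Per-class prediction weights p_{c,b} for one shared predicate.

    examples: list of (attrs, y) with multiclass label y
    D: dict mapping (example_index, class) -> weight over derived pairs
    predicate: an (attribute, value) pair; it ignores c, as in AdaBoost.MH
    """
    a, v = predicate
    p = {}
    for c in classes:
        W = {0: {-1: eps, +1: eps}, 1: {-1: eps, +1: eps}}  # smoothed counts
        for i, (attrs, y) in enumerate(examples):
            b = 1 if attrs.get(a) == v else 0
            s = +1 if y == c else -1          # derived binary label of (x, c)
            W[b][s] += D[(i, c)]
        p[c] = {b: 0.5 * math.log(W[b][+1] / W[b][-1]) for b in (0, 1)}
    return p
```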

49 AdaBoost & NLP. An alternative is to use binary AdaBoost to train a separate discriminator (binary classifier) for each class, and combine their outputs by choosing the class c that maximizes f_c(x), where f_c(x) is the final confidence-weighted prediction of the discriminator for class c. This variant is called AdaBoost.MI (multiclass, independent discriminators).

50 AdaBoost & NLP. AdaBoost.MI differs from AdaBoost.MH in that predicates are selected independently for each class; we do not require that the weak hypothesis at round t be the same for all classes.
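Finally, a sketch of the AdaBoost.MI combination step described on slide 49, assuming one independently boosted, confidence-weighted ensemble per class:

```python
def classify_mi(discriminators, x):
    """discriminators: dict mapping class c -> list of (alpha, h) pairs,
    where h is a function x -> real-valued prediction. Returns the class c
    maximizing f_c(x), the confidence-weighted sum of its ensemble."""
    def f(c):
        return sum(alpha * h(x) for alpha, h in discriminators[c])
    return max(discriminators, key=f)
```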

