1 Text Models

2 Why? To “understand” text To assist in text search & ranking For autocompletion Part of Speech Tagging

3 Simple application: spelling suggestions Say that we have a dictionary of words – A real dictionary, or the result of crawling – Or sentences instead of words Now we are given a word w not in the dictionary How can we correct it to something in the dictionary?

4 String editing Given two strings (sequences), the “distance” between them is defined as the minimum number of “character edit operations” needed to turn one sequence into the other. Edit operations: delete, insert, modify (a character) – A cost is assigned to each operation (e.g., uniform cost = 1)

5 Edit distance Already a simple model for language: modeling the creation of strings (and the errors in them) through simple edit operations

6 Distance between strings Edit distance between strings = minimum number of edit operations needed to get from one string to the other – Symmetric because of the particular choice of edit operations and the uniform cost distance(“Willliam Cohon”, “William Cohen”) = 2

7 Finding the edit distance An “alignment” problem Deciding how to align the two strings Can we try all alignments? How many (reasonable options) are there?

8 Dynamic Programming An umbrella name for a collection of algorithms Main idea: reuse computation for sub-problems, combined in different ways

9 Example: Fibonacci
def fib(n):
    if n == 0 or n == 1:
        return n
    else:
        return fib(n-1) + fib(n-2)
Exponential time!

10 Fib with Dynamic Programming
table = {}
def fib(n):
    global table
    if n in table:            # result already memoized
        return table[n]
    if n == 0 or n == 1:
        table[n] = n
        return n
    else:
        value = fib(n-1) + fib(n-2)
        table[n] = value
        return value

12 Using a partial solution Partial solution: – Alignment of s up to location i, with t up to location j How to reuse? Try all options for the “last” operation

15 Base cases: D(i,0) = i and D(0,i) = i, for i deletions / insertions Easy to generalize to arbitrary cost functions!
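
To make the recurrence behind these base cases concrete, here is a minimal sketch of the edit-distance dynamic program in Python, with uniform cost 1 for insert, delete, and modify (the function name and table layout are illustrative, not taken from the slides):

def edit_distance(s, t):
    # D[i][j] = edit distance between s[:i] and t[:j]
    D = [[0] * (len(t) + 1) for _ in range(len(s) + 1)]
    for i in range(len(s) + 1):
        D[i][0] = i                                  # i deletions
    for j in range(len(t) + 1):
        D[0][j] = j                                  # j insertions
    for i in range(1, len(s) + 1):
        for j in range(1, len(t) + 1):
            sub = 0 if s[i-1] == t[j-1] else 1       # match or modify
            D[i][j] = min(D[i-1][j] + 1,             # delete s[i-1]
                          D[i][j-1] + 1,             # insert t[j-1]
                          D[i-1][j-1] + sub)         # modify / keep
    return D[len(s)][len(t)]

print(edit_distance("Willliam Cohon", "William Cohen"))  # 2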

16 Models Bag-of-words N-grams Hidden Markov Models Probabilistic Context Free Grammar

17 Bag-of-words Every document is represented as a bag of the words it contains Bag means that we keep the multiplicity (=number of occurrences) of each word Very simple, but we lose all track of structure
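
A minimal sketch of the bag-of-words representation in Python, using collections.Counter (the example document is ours):

from collections import Counter

doc = "the sailor dogs me every day and every night"
bag = Counter(doc.split())   # word -> number of occurrences (multiplicity kept)
print(bag["every"])          # 2 -- counts survive, word order and structure do not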

18 n-grams Limited structure Sliding window of n words

19 n-gram model
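
The formula on this slide did not survive the transcript; the standard n-gram model it refers to approximates the probability of a sentence as P(w_1 … w_m) ≈ Π_i P(w_i | w_{i-n+1} … w_{i-1}), i.e., each word is conditioned only on the previous n-1 words (the sliding window from the previous slide).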

20 How would we infer the probabilities? Issues: – Overfitting – Probability 0 for n-grams never seen in the training data

21 How would we infer the probabilities? Maximum Likelihood: P(w_n | w_1 … w_{n-1}) = count(w_1 … w_n) / count(w_1 … w_{n-1})

22 "add-one" (Laplace) smoothing V = Vocabulary size

23 Good-Turing Estimate

24 Good-Turing
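
The Good-Turing formulas on these two slides are missing from the transcript; the standard estimate they refer to replaces an observed count r by the adjusted count r* = (r + 1) * N_{r+1} / N_r, where N_r is the number of distinct n-grams that occur exactly r times, and it reserves a total probability mass of N_1 / N for unseen events (N = total number of observed n-grams).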

25 More than a fixed n.. Linear Interpolation
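
The interpolation formula itself is not in the transcript; the usual form mixes the maximum-likelihood estimates of the different orders, e.g. for trigrams: P(w_3 | w_1, w_2) = λ_3 P_ML(w_3 | w_1, w_2) + λ_2 P_ML(w_3 | w_2) + λ_1 P_ML(w_3), with λ_1 + λ_2 + λ_3 = 1; the λ weights are typically tuned on held-out data.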

26 Precision vs. Recall

27 Richer Models HMM PCFG

28 Motivation: Part-of-Speech Tagging – Useful for ranking – For machine translation – Word-Sense Disambiguation – …

29 Part-of-Speech Tagging Tag this word. This word is a tag. He dogs like a flea The can is in the fridge The sailor dogs me every day

30 A Learning Problem Training set: tagged corpus – Most famous is the Brown Corpus with about 1M words – The goal is to learn a model from the training set, and then perform tagging of untagged text – Performance tested on a test-set

31 Simple Algorithm Assign to each word its most popular tag in the training set Problem: ignores context – “dogs” and “tag” will always be tagged as nouns, and “can” will always be tagged as a verb (wrong for “The can is in the fridge”) Still, achieves around 80% correctness for real-life test sets – Goes up to as high as 90% when combined with some simple rules
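
A minimal sketch of this baseline; the tiny tagged training list stands in for a real corpus such as Brown:

from collections import Counter, defaultdict

# (word, tag) pairs standing in for a tagged training corpus
training = [("the", "DET"), ("can", "NOUN"), ("can", "VERB"), ("can", "VERB"),
            ("dogs", "NOUN"), ("dogs", "NOUN"), ("dogs", "VERB")]

counts = defaultdict(Counter)
for word, tag in training:
    counts[word][tag] += 1

def most_popular_tag(word):
    # the tag seen most often with this word in the training set
    return counts[word].most_common(1)[0][0]

print(most_popular_tag("can"))   # VERB -- even in "The can is in the fridge"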

32 Hidden Markov Model (HMM) Model: sentences are generated by a probabilistic process – in particular, a Markov Chain whose states correspond to parts of speech Transitions are probabilistic In each state a word is emitted – The emitted word is again chosen probabilistically, based on the state

33 HMM An HMM is: – A set of N states – A set of M symbols (words) – An N×N matrix of transition probabilities Ptrans – A vector of size N of initial state probabilities Pstart – An N×M matrix of emission probabilities Pout “Hidden” because we see only the outputs, not the sequence of states traversed

34 Example

35 3 Fundamental Problems 1) Compute the probability of a given observation sequence (= sentence) 2) Given an observation sequence, find the most likely hidden state sequence – this is tagging 3) Given a training set, find the model that would make the observations most likely

36 Tagging Find the most likely sequence of states that led to an observed output sequence Problem: exponentially many possible sequences!

37 Viterbi Algorithm Dynamic Programming V_{t,k} is the probability of the most probable state sequence – generating the first t+1 observations (X_0, …, X_t) – and terminating at state k

38 Viterbi Algorithm Dynamic Programming V_{t,k} is the probability of the most probable state sequence – generating the first t+1 observations (X_0, …, X_t) – and terminating at state k V_{0,k} = Pstart(k) * Pout(k, X_0) V_{t,k} = Pout(k, X_t) * max_{k'} { V_{t-1,k'} * Ptrans(k', k) }

39 Finding the path Note that we are interested in the most likely path, not only in its probability So we need to keep track at each point of the argmax – Combine them to form a sequence What about top-k?
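
A compact sketch of the Viterbi recurrence with backpointers; the parameters follow the Pstart / Ptrans / Pout naming of slide 33, here as dictionaries of dictionaries (in practice log-probabilities are used to avoid underflow):

def viterbi(observations, states, Pstart, Ptrans, Pout):
    # V[t][k]: probability of the best state sequence for X_0..X_t ending in state k
    V = [{k: Pstart[k] * Pout[k][observations[0]] for k in states}]
    back = [{}]                                    # back[t][k]: best previous state
    for t in range(1, len(observations)):
        V.append({})
        back.append({})
        for k in states:
            prev, score = max(((kp, V[t-1][kp] * Ptrans[kp][k]) for kp in states),
                              key=lambda pair: pair[1])
            V[t][k] = Pout[k][observations[t]] * score
            back[t][k] = prev
    # follow the backpointers from the best final state to recover the path
    last = max(states, key=lambda k: V[-1][k])
    path = [last]
    for t in range(len(observations) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))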

40 Complexity O(T*|S|^2) Where T is the sequence (=sentence) length, |S| is the number of states (= number of possible tags)

41 Computing the probability of a sequence Forward probabilities: α_t(k) is the probability of seeing the sequence X_0 … X_t and terminating at state k Backward probabilities: β_t(k) is the probability of seeing the sequence X_{t+1} … X_n given that the Markov process is at state k at time t.

42 Computing the probabilities Forward algorithm: α_0(k) = Pstart(k) * Pout(k, X_0); α_t(k) = Pout(k, X_t) * Σ_{k'} α_{t-1}(k') * Ptrans(k', k); P(X_0, …, X_n) = Σ_k α_n(k) Backward algorithm: β_t(k) = P(X_{t+1} … X_n | state at time t is k); β_t(k) = Σ_{k'} Ptrans(k, k') * Pout(k', X_{t+1}) * β_{t+1}(k'); β_n(k) = 1 for all k; P(X_0, …, X_n) = Σ_k Pstart(k) * Pout(k, X_0) * β_0(k)
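
The forward recursion transcribes almost directly into Python (same illustrative parameter layout as the Viterbi sketch above):

def forward(observations, states, Pstart, Ptrans, Pout):
    # alpha[t][k] = P(X_0 .. X_t, state at time t is k)
    alpha = [{k: Pstart[k] * Pout[k][observations[0]] for k in states}]
    for t in range(1, len(observations)):
        alpha.append({k: Pout[k][observations[t]] *
                         sum(alpha[t-1][kp] * Ptrans[kp][k] for kp in states)
                      for k in states})
    return sum(alpha[-1][k] for k in states)       # probability of the whole sequence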

43 Learning the HMM probabilities Expectation-Maximization Algorithm 1. Start with initial probabilities 2. Compute E_ij, the expected number of transitions from i to j while generating a sequence, for each i, j (see next) 3. Set the probability of transition from i to j to E_ij / (Σ_k E_ik) 4. Similarly for the emission probabilities 5. Repeat 2-4 using the new model, until convergence

44 Estimating the expected counts By sampling – Re-run a random execution of the model 100 times – Count the transitions By analysis – Use Bayes' rule on the formula for the sequence probability – Called the Forward-Backward algorithm
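
The actual formula is not on the slide; in the standard forward-backward derivation the expected number of transitions from i to j is E_ij = Σ_t α_t(i) * Ptrans(i, j) * Pout(j, X_{t+1}) * β_{t+1}(j) / P(O), i.e., the posterior probability of being in state i at time t and in state j at time t+1, summed over all positions t, using the forward and backward probabilities defined above.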

45 Accuracy Tested experimentally Exceeds 96% for the Brown corpus – Trained on one half and tested on the other half Compare with the 80-90% of the trivial algorithm The hard cases are few, but they are very hard…

46 NLTK http://www.nltk.org/ Natural Language Toolkit Open-source Python modules for NLP tasks – Including stemming, POS tagging and much more

47 Context-Free Grammars Context-Free Grammars are a more natural model for natural language Syntax rules are very easy to formulate using CFGs Provably more expressive than Finite State Machines – E.g., can check for balanced parentheses

48 Context-Free Grammars Non-terminals Terminals Production rules – V → w, where V is a non-terminal and w is a sequence of terminals and non-terminals
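
A toy grammar makes the definition concrete; the sketch below writes it with NLTK (introduced on slide 46) and parses a small sentence in the spirit of the earlier tagging examples — the grammar itself is ours and only illustrative:

import nltk

grammar = nltk.CFG.fromstring("""
    S -> NP VP
    NP -> Det N
    VP -> V NP
    Det -> 'the'
    N -> 'sailor' | 'dogs'
    V -> 'dogs' | 'likes'
""")
parser = nltk.ChartParser(grammar)
for tree in parser.parse("the sailor dogs the dogs".split()):
    print(tree)   # prints the parse tree(s) licensed by the grammar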

49 Context Free Grammars Can be used as acceptors Can be used as a generative model Similarly to the case of Finite State Machines How long can a string generated by a CFG be?

50 Stochastic Context-Free Grammar Non-terminals Terminals Production rules, each associated with a probability – V → w, where V is a non-terminal and w is a sequence of terminals and non-terminals

51 Chomsky Normal Form Every rule is of the form V → V1 V2, where V, V1, V2 are non-terminals, or V → t, where V is a non-terminal and t is a terminal Every (S)CFG (that does not generate the empty string) can be written in this form Makes designing many algorithms easier

52 Questions What is the probability of a string? – Defined as the sum of the probabilities of all possible derivations of the string Given a string, what is its most likely derivation? – Also called the Viterbi derivation or parse – An easy adaptation of the Viterbi Algorithm for HMMs Given a training corpus and a CFG (no probabilities), learn the probabilities of the derivation rules

53 Inside-outside probabilities Inside probability: the probability of generating w_p … w_q from the non-terminal N^j. Outside probability: the total probability of beginning with the start symbol N^1 and generating the non-terminal N^j (covering w_p … w_q) together with everything outside w_p … w_q
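
In the usual inside-outside notation (not shown on the slide), for a sentence w_1 … w_m: inside_j(p, q) = P(w_p … w_q | N^j spans positions p..q), and outside_j(p, q) = P(w_1 … w_{p-1}, N^j spans positions p..q, w_{q+1} … w_m).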

54 CYK algorithm (diagram: N^j rewrites to N^r N^s, with N^r covering w_p … w_d and N^s covering w_{d+1} … w_q)

55 CYK algorithm (diagram: under the start symbol N^1 spanning w_1 … w_m, a rule N^f → N^j N^g with N^j covering w_p … w_q and N^g covering w_{q+1} … w_e)

56 CYK algorithm (diagram: under the start symbol N^1 spanning w_1 … w_m, a rule N^f → N^g N^j with N^g covering w_e … w_{p-1} and N^j covering w_p … w_q)

57 Outside probability

58 Probability of a sentence

59 The probability that a binary rule is used (1)

60 The probability that N^j is used (2)

62 The probability that a unary rule is used (3)

63 Multiple training sentences (1) (2)

