
1 CSA3050: Natural Language Algorithms
Probabilistic Phrase Structure Grammars (PCFGs)
December 2004

2 Handling Ambiguities

The Earley algorithm is equipped to represent ambiguities efficiently, but not to resolve them. Methods available for resolving ambiguities include:
– Semantics: choose the parse that makes sense.
– Statistics: choose the parse that is most likely.
Probabilistic context-free grammars (PCFGs) offer a statistical solution.

3 PCFG

A PCFG is a 5-tuple (NT, T, P, S, D) where D is a function that assigns a probability to each rule p ∈ P.
A PCFG augments each rule with a conditional probability:
A → β [p]
Formally, p is the probability of the given expansion given the LHS non-terminal, i.e. P(A → β | A).

4 Example PCFG

[Figure: an example PCFG; a fragment of it is reproduced on the next slide.]

5 Example PCFG Fragment

S → NP VP [.80]
S → Aux NP VP [.15]
S → VP [.05]

The conditional probabilities for a given A ∈ NT sum to 1. A PCFG can be used to estimate the probability of each parse tree for a sentence S.
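
To make the sum-to-one constraint concrete, here is a minimal sketch (not from the slides) that stores this rule fragment as a Python dict keyed by (LHS, RHS) and checks that the probabilities for each left-hand side sum to 1; the representation is an illustrative choice.

from collections import defaultdict

rules = {
    ("S", ("NP", "VP")): 0.80,
    ("S", ("Aux", "NP", "VP")): 0.15,
    ("S", ("VP",)): 0.05,
}

# D must define a proper conditional distribution for each LHS.
totals = defaultdict(float)
for (lhs, _rhs), p in rules.items():
    totals[lhs] += p
for lhs, total in totals.items():
    assert abs(total - 1.0) < 1e-9, f"rules for {lhs} sum to {total}"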

6 Probability of a Parse Tree

For a sentence S, the probability assigned by a PCFG to a parse tree T is

P(T) = ∏_{n ∈ T} P(r(n))

where n ranges over the nodes of T and r(n) is the production rule used to expand n, i.e. the product of the probabilities of all the rules r used to expand each node n in T.
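
As a concrete illustration of this product, here is a small sketch; the Tree class and recursion are assumptions chosen for illustration, reusing the (LHS, RHS)-keyed probability dict from the earlier sketch.

class Tree:
    """A parse-tree node: children are subtrees, or a single word string
    at a preterminal (illustrative representation, not from the slides)."""
    def __init__(self, label, children):
        self.label = label
        self.children = children

def tree_prob(tree, rule_probs):
    """P(T) = product over nodes n of P(r(n))."""
    if isinstance(tree.children[0], str):            # preterminal A -> w
        return rule_probs[(tree.label, tuple(tree.children))]
    rhs = tuple(c.label for c in tree.children)      # internal A -> B C ...
    p = rule_probs[(tree.label, rhs)]
    for child in tree.children:
        p *= tree_prob(child, rule_probs)
    return p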

7 Ambiguous Sentence

[Figure: two parse trees T_L and T_R for an ambiguous sentence.]

P(T_L) = 1.5 × 10⁻⁶
P(T_R) = 1.7 × 10⁻⁶
P(S) = P(T_L) + P(T_R) = 3.2 × 10⁻⁶

8 The Parsing Problem for PCFGs

The parsing problem for PCFGs is to produce the most likely parse for a given sentence, i.e. to compute the parse tree T spanning the sentence whose probability is maximal.
The CKY algorithm assumes the grammar is in Chomsky Normal Form (CNF):
– no ε-productions
– rules of the form A → B C or A → a

9 CKY Algorithm – Base Case

Base case: covering input strings of length 1 (i.e. individual words). In CNF, the probability p must come from that of the corresponding lexical rule A → w [p].

10 CKY Algorithm – Recursive Case

Recursive case: input strings of length > 1. Writing w_ij for the substring between positions i and j: A ⇒* w_ij if and only if there is a rule A → B C and some i < k < j such that B derives w_ik and C derives w_kj. In this case P(w_ij) is obtained by multiplying together P(w_ik), P(w_kj), and P(A → B C). These probabilities are found in other parts of the table; take the maximum value over all such splits.

11 Probabilistic CKY Algorithm for a Sentence of Length n

1. for k := 1 to n do
2.   π[k-1, k, A] := P(A → w_k)
3.   for i := k-2 downto 0 do
4.     for j := k-1 downto i+1 do
5.       π[i, k, A] := max [ π[i, j, B] × π[j, k, C] × P(A → B C) ] for each A → B C ∈ G
6. return π[0, n, S]
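
Below is a runnable Python sketch of this recurrence. The grammar encoding (separate dicts for lexical and binary CNF rules) and the function name are illustrative assumptions; it returns only the best probability, not the tree itself.

from collections import defaultdict

def cky_best_prob(words, lexical, binary, start="S"):
    """lexical: {(A, w): p} for rules A -> w;
    binary: {(A, B, C): p} for rules A -> B C (grammar assumed in CNF).
    Returns the max probability of a parse of words, or 0.0 if none."""
    n = len(words)
    pi = defaultdict(float)                        # pi[(i, k, A)]
    for k in range(1, n + 1):
        for (A, w), p in lexical.items():          # base case: pi[k-1, k, A]
            if w == words[k - 1]:
                pi[(k - 1, k, A)] = max(pi[(k - 1, k, A)], p)
        for i in range(k - 2, -1, -1):             # recursive case
            for j in range(i + 1, k):              # split point
                for (A, B, C), p in binary.items():
                    cand = pi[(i, j, B)] * pi[(j, k, C)] * p
                    if cand > pi[(i, k, A)]:
                        pi[(i, k, A)] = cand
    return pi[(0, n, start)]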

12 [Diagram: a span of words w_{i+1} … w_k split at position j; B derives the left part w_{i+1} … w_j and C derives the right part w_{j+1} … w_k.]

13 Probabilistic Earley

Non-probabilistic completer:

procedure Completer((B → Z •, [j,k]))
  for each (A → X • B Y, [i,j]) in chart[j] do
    enqueue((A → X B • Y, [i,k]), chart[k])

Probabilistic completer:

procedure Completer((B → Z •, [j,k], P_jk))
  for each (A → X • B Y, [i,j], P_ij) in chart[j] do
    enqueue((A → X B • Y, [i,k], P_ij × P_jk), chart[k])
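
The completer step translates almost directly into Python. The State layout and chart representation below are assumptions for illustration; a full parser would also need the predictor and scanner operations.

from collections import namedtuple

# lhs, rhs: the rule; dot: position of the dot in rhs; i, j: the span;
# prob: the probability accumulated so far (illustrative layout).
State = namedtuple("State", "lhs rhs dot i j prob")

def completer(completed, chart):
    """completed is a state (B -> Z ., [j,k], Pjk) with the dot at the end;
    chart is a list of per-position state lists."""
    j, k = completed.i, completed.j
    for st in list(chart[j]):                       # states ending at j
        if st.dot < len(st.rhs) and st.rhs[st.dot] == completed.lhs:
            advanced = State(st.lhs, st.rhs, st.dot + 1, st.i, k,
                             st.prob * completed.prob)
            if advanced not in chart[k]:            # enqueue into chart[k]
                chart[k].append(advanced)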

14 Discovery of Probabilities: Normal Rules

Use a corpus of already parsed sentences, e.g. the Penn Treebank (Marcus et al. 1993):
– parse trees for the 1M-word Brown Corpus
– skeleton parsing: partial parses, leaving out the "hard" things (such as PP-attachment)
Parse the corpus and take statistics; this has to account for ambiguity.

P(α → β | α) = C(α → β) / Σ_γ C(α → γ) = C(α → β) / C(α)
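
A sketch of this relative-frequency estimate, reusing the illustrative Tree representation from the earlier sketch; the traversal and output format are assumptions, not a Treebank reader.

from collections import Counter

def estimate_rule_probs(trees):
    """P(a -> b | a) = C(a -> b) / C(a), counted over parsed trees."""
    rule_counts, lhs_counts = Counter(), Counter()
    def visit(t):
        if isinstance(t.children[0], str):          # preterminal A -> w
            rhs = tuple(t.children)
        else:
            rhs = tuple(c.label for c in t.children)
            for c in t.children:
                visit(c)
        rule_counts[(t.label, rhs)] += 1
        lhs_counts[t.label] += 1
    for t in trees:
        visit(t)
    return {r: c / lhs_counts[r[0]] for r, c in rule_counts.items()}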

15 Penn Treebank Example 1

((S (NP (NP Pierre Vinken),
        (ADJP (NP 61 years) old,))
    will
    (VP join
        (NP the board)
        (PP as (NP a nonexecutive director))
        (NP Nov 29)))
 .)

16 Penn Treebank – Example 2

( (S (NP (DT The) (NNP Fulton) (NNP County) (NNP Grand) (NNP Jury))
     (VP (VBD said)
         (NP (NNP Friday))
         (SBAR (-NONE- 0)
               (S (NP (DT an) (NN investigation)
                      (PP (IN of)
                          (NP (NP (NNP Atlanta)) (POS 's)
                              (JJ recent) (JJ primary) (NN election))))
                  (VP (VBD produced)
                      (NP (OQUOTE OQUOTE) (DT no) (NN evidence) (CQUOTE CQUOTE)
                          (SBAR (IN that)
                                (S (NP (DT any) (NNS irregularities))
                                   (VP (VBD took)
                                       (NP (NN place)))))))))))
  (PERIOD PERIOD) )

17 Problems with PCFGs

Fundamental Independence Assumption (FIA): a PCFG assumes that the expansion of any one non-terminal is independent of the expansion of any other non-terminal; hence rule probabilities are always simply multiplied together. The FIA is not always realistic, however.

18 Problems with PCFGs

PCFGs have difficulty representing dependencies between parse-tree nodes:
– Structural dependencies: between the expansion of a node N and anything above N in the parse tree, such as a node M.
– Lexical dependencies: between the expansion of a node N and occurrences of particular words in the text segments dominated by N.

19 Tree Dependencies

[Diagram: a node N spanning positions q..r, dominated by a node M spanning p..s.]

20 Structural Dependency

Examination of text corpora has shown (Kuno 1972) that in English and other languages there is a strong tendency (c. 2:1) for the subject of a sentence to be a pronoun:
– "She's able to take her baby to work" versus
– "Joanna worked until she had a family"
whilst the object tends to be a non-pronoun:
– "All the people signed the confessions" versus
– "Some laws prohibit it"

21 Expansion sometimes depends on ancestor nodes

[Diagram: parse tree for "he saw Mr. Bush", in which the subject NP expands to Pron ("he") and the object NP expands to N ("Mr. Bush").]

22 Dependencies Cannot Be Stated

These dependencies could be captured if it were possible to say that the probabilities associated with, e.g.,
NP → Pron
NP → N
depend on whether the NP is a subject or an object. However, this cannot normally be said in a standard PCFG.

23 Lexical Dependencies

Consider the sentence: "Moscow sent soldiers into Afghanistan." Suppose the grammar includes the rules
NP → NP PP
VP → VP PP
There will typically be two parse trees.

24 PP Attachment Ambiguity

[Diagram: the two parse trees for "Moscow sent soldiers into Afghanistan", one with the PP attached to the NP ("soldiers into Afghanistan") and one with the PP attached to the VP.]
67% of PPs attach to NPs; 33% of PPs attach to VPs.

25 PP Attachment Ambiguity

[Diagram: the same two parse trees for "Moscow sent soldiers from Afghanistan", again contrasting NP-attachment and VP-attachment of the PP.]
67% of PPs attach to NPs; 33% of PPs attach to VPs.

26 Lexical Properties

Raw statistics on the use of these two rules suggest
NP → NP PP (67%)
VP → VP PP (33%)
In this case the raw statistics are misleading and yield the wrong conclusion. The correct parse should be decided on the basis of the lexical properties of the verb "send … into" alone, since we know that the basic pattern for this verb is
(NP) send (NP) (PP[into])
where the PP headed by "into" attaches to the VP.

27 Lexicalised PCFGs

Basic idea: each syntactic constituent is associated with a head, which is a single word. Each non-terminal in a parse tree is annotated with that word.
Michael Collins (1999), Head-Driven Statistical Models for Natural Language Parsing, PhD thesis (see the author's website).

28 Lexicalised Tree

[Figure: a lexicalised parse tree; slide 34 refers back to a circled NP node in this figure whose head is "sacks" and whose mother's head is "dumped".]

29 Generating Lexicalised Parse Trees

To generate such a tree, each rule must identify exactly one right-hand-side constituent as the head daughter. The headword of a node is then inherited from the headword of its head daughter. In the case of a lexical item, the head is clearly the item itself (though the word might undergo minor inflectional modification).

30 Finding the Head Constituent

In some cases this is very easy, e.g.
NP[N] → Det N (the man)
VP[V] → V NP (… asked John)
In other cases it isn't:
PP[?] → P NP (to London)
Many modern linguistic theories include a component that defines what heads are.
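
One common mechanism is a head-percolation table mapping each category to an ordered list of preferred daughters. The entries below are illustrative guesses for this sketch, not Collins's actual head rules.

# Illustrative head table: for each LHS, daughters in order of preference.
HEAD_TABLE = {
    "NP": ["N", "NP"],
    "VP": ["V", "VP"],
    "S":  ["VP"],
    "PP": ["P"],        # one convention; theories differ on PP heads
}

def head_index(lhs, rhs):
    """Index of the head daughter in rhs for a rule lhs -> rhs."""
    for cat in HEAD_TABLE.get(lhs, []):
        if cat in rhs:
            return rhs.index(cat)
    return 0            # fallback: leftmost daughter

# e.g. head_index("NP", ("Det", "N")) == 1, so "man" heads "the man"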

31 Discovery of Probabilities: Lexicalised Rules

We would need to establish individual probabilities of, e.g.,
VP(dumped) → V(dumped) NP(sacks) PP(into)
VP(dumped) → V(dumped) NP(cats) PP(into)
VP(dumped) → V(dumped) NP(hats) PP(into)
VP(dumped) → V(dumped) NP(sacks) PP(above)
Problem: no corpus is big enough to train with this number of rules (nearly all the rules would have zero counts). We need to make independence assumptions that allow counts to be clustered. Which independence assumptions?

32 Charniak's (1997) Approach

In a normal PCFG, the probability is conditioned only on the syntactic category, i.e. p(r(n) | c(n)). Charniak also conditioned the probability of a given rule expansion on the head of the non-terminal:
p(r(n) | c(n), h(n))
N.B. This approach pools the statistics of all the individual rules on the previous slide together, i.e. as VP(dumped) → V NP PP.

33 Probability of a Head

Now that we have added heads as a conditioning factor, we must also decide how to compute the probability of a head. The null assumption, that all heads are equally probable, is unrealistic (different verbs have different frequencies of occurrence). Charniak therefore adopted a better assumption: the probability of a node n having head h depends on
– the syntactic category of n, and
– the head of n's mother.

34 Including the Head of the Mother

So instead of equal probabilities for all heads, we have
p(h(n) = word_i | c(n), h(m(n)))
Relating this to the circled node in our previous figure:
p(h(n) = sacks | c(n) = NP, h(m(n)) = dumped)

35 Probability of a Complete Parse

STANDARD PCFG: for a sentence S, the probability assigned by a PCFG to a parse tree T was

P(T) = ∏_{n ∈ T} P(r(n))

where n is a node of T and r(n) is the production rule that expanded n.

HEAD-DRIVEN PCFG: including the head probabilities, the probability of a complete parse becomes

P(T) = ∏_{n ∈ T} p(r(n) | c(n), h(n)) × p(h(n) | c(n), h(m(n)))
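
A sketch of the head-driven product, assuming lexicalised nodes that carry a category c(n), a head word h(n), and a rule r(n); the node class and the probability-table formats are assumptions for illustration, standing in for models estimated from a treebank.

from dataclasses import dataclass, field

@dataclass
class LexNode:
    label: str            # c(n), e.g. "NP"
    head: str             # h(n), e.g. "sacks"
    rule: tuple           # r(n) as an RHS tuple, e.g. ("V", "NP", "PP")
    children: list = field(default_factory=list)   # empty at leaves

def head_driven_prob(node, rule_probs, head_probs, mother_head=None):
    """P(T) = prod over n of p(r(n)|c(n),h(n)) * p(h(n)|c(n),h(m(n)));
    mother_head is None at the root (a real model would use a START symbol)."""
    p = rule_probs[(node.rule, node.label, node.head)]       # p(r | c, h)
    p *= head_probs[(node.head, node.label, mother_head)]    # p(h | c, h(m))
    for child in node.children:
        p *= head_driven_prob(child, rule_probs, head_probs,
                              mother_head=node.head)
    return p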

36 Evaluating Parsers

Let
A = number of correct constituents in the candidate parse
B = total number of constituents in the treebank parse
C = total number of constituents in the candidate parse
Labelled Recall = A / B
Labelled Precision = A / C
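
As a minimal sketch, a parse can be treated as a multiset of labelled constituents (label, start, end); this tuple representation is an assumption for illustration.

from collections import Counter

def labelled_precision_recall(candidate, treebank):
    """candidate, treebank: iterables of (label, start, end) constituents."""
    cand, gold = Counter(candidate), Counter(treebank)
    A = sum((cand & gold).values())     # constituents appearing in both
    C = sum(cand.values())              # constituents in the candidate parse
    B = sum(gold.values())              # constituents in the treebank parse
    return A / C, A / B                 # labelled precision, labelled recall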

