
1 Statistical NLP Winter 2009 Lecture 11: Parsing II Roger Levy Thanks to Jason Eisner & Dan Klein for slides

2 PCFGs as language models. What does the goal weight (neg. log-prob) represent? It is the probability of the most probable tree whose yield is the sentence. Suppose we want to do language modeling: then we are interested in the probability of all trees. [Weighted CKY chart for "time 1 flies 2 like 3 an 4 arrow 5" omitted.] Compare "Put the file in the folder" vs. "Put the file and the folder".

3 [Weighted CKY chart for "time 1 flies 2 like 3 an 4 arrow 5" omitted.] Grammar (weights): 1 S → NP VP, 6 S → Vst NP, 2 S → S PP, 1 VP → V NP, 2 VP → VP PP, 1 NP → Det N, 2 NP → NP PP, 3 NP → NP NP, 0 PP → P NP. Could just add up the parse probabilities; oops, back to finding exponentially many parses.

4 [Same weighted CKY chart and grammar as on the previous slide.] Any more efficient way?

5 [Same weighted CKY chart and grammar as before.] Add as we go … (the inside algorithm)

6 [Same weighted CKY chart and grammar as before, continued.] Add as we go … (the inside algorithm)
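Below is a minimal sketch of the inside algorithm in Python: it is CKY with summation in place of maximization, so each chart cell holds the total probability of all subtrees over a span. The grammar interface (dicts binary_rules and lexical_rules for a CNF PCFG) is an assumption made for illustration, not something from the slides.

```python
from collections import defaultdict

def inside(words, binary_rules, lexical_rules, root="S"):
    """Total probability of all parses of `words` under a CNF PCFG.

    binary_rules: dict mapping (X, Y, Z) -> P(X -> Y Z)   (assumed layout)
    lexical_rules: dict mapping (X, word) -> P(X -> word) (assumed layout)
    """
    n = len(words)
    # chart[i][j][X] = total probability that X yields words[i:j]
    chart = [[defaultdict(float) for _ in range(n + 1)] for _ in range(n + 1)]

    # Width-1 spans: preterminal rules X -> word
    for i, w in enumerate(words):
        for (X, word), p in lexical_rules.items():
            if word == w:
                chart[i][i + 1][X] += p

    # Wider spans: sum over split points and rules (CKY with + instead of max)
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            j = i + width
            for k in range(i + 1, j):
                for (X, Y, Z), p in binary_rules.items():
                    left, right = chart[i][k][Y], chart[k][j][Z]
                    if left and right:
                        chart[i][j][X] += p * left * right

    return chart[0][n][root]
```

Replacing the additions with max (and recording backpointers) gives back the Viterbi parser from the previous slides.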

7 Charts and lattices. You can equivalently represent a parse chart as a lattice constructed over some initial arcs. This will also set the stage for Earley parsing later. [Lattice figure for "salt flies scratch" omitted.]

8 (Speech) Lattices. There was nothing magical about words spanning exactly one position. When working with speech, we generally don't know how many words there are, or where they break. We can represent the possibilities as a lattice and parse these just as easily. [Speech lattice figure (hypotheses such as "I", "Ivan", "eyes", "saw", "'ve", "awe", "of", "a", "an", "van") omitted.]

9 Speech parsing mini-example. Grammar (negative log-probs): [We'll do it on the board if there's time] 1 S → NP VP, 1 VP → V NP, 2 VP → V PP, 2 NP → DT NN, 2 NP → DT NNS, 3 NP → NNP, 2 NP → PRP, 0 PP → IN NP, 1 PRP → I, 9 NNP → Ivan, 6 V → saw, 4 V → 've, 7 V → awe, 2 DT → a, 2 DT → an, 3 IN → of, 6 NNP → eyes, 9 NN → awe, 6 NN → saw, 5 NN → van.

10 Better parsing. We've now studied how to do correct parsing; this is the problem of inference given a model. The other half of the question is how to estimate a good model. That's what we'll spend the rest of the day on.

11 Problems with PCFGs? If we do no annotation, these trees differ only in one rule: VP → VP PP vs. NP → NP PP. The parse will go one way or the other, regardless of words. We'll look at two ways to address this: sensitivity to specific words through lexicalization, and sensitivity to structural configuration with unlexicalized methods.

12 Problems with PCFGs? [insert performance statistics for a vanilla parser]

13 Problems with PCFGs. What's different between basic PCFG scores here? What (lexical) correlations need to be scored?

14 Problems with PCFGs. Another example of PCFG indifference: the left structure is far more common. How to model this? It is really structural: chicken with potatoes with gravy. Lexical parsers model this effect, though not by virtue of being lexical.

15 PCFGs and Independence. Symbols in a PCFG define conditional independence assumptions: at any node, the material inside that node is independent of the material outside that node, given the label of that node. Any information that statistically connects behavior inside and outside a node must flow through that node. [Tree diagrams omitted.]

16 Solution(s) to problems with PCFGs. Two common solutions seen for PCFG badness: (1) lexicalization: put head-word information into categories, and use it to condition rewrite probabilities; (2) state-splitting: distinguish sub-instances of more general categories (e.g., NP into NP-under-S vs. NP-under-VP). You can probably see that (1) is a special case of (2). More generally, the solution involves information propagation through PCFG nodes.

17 Lexicalized Trees. Add headwords to each phrasal node (syntactic vs. semantic heads). Headship is not in (most) treebanks, so we usually use head rules, e.g.: NP: take the leftmost NP; else the rightmost N*; else the rightmost JJ; else the right child. VP: take the leftmost VB*; else the leftmost VP; else the left child. How is this information propagation? (A toy sketch follows below.)
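As a concrete illustration of head-word propagation, here is a toy Python sketch. The Tree class and the HEAD_RULES table are simplified stand-ins for the rules listed above, not the actual head tables used by treebank parsers.

```python
class Tree:
    def __init__(self, label, children=(), word=None):
        self.label, self.children, self.word = label, list(children), word

# Toy head rules mirroring the slide: an ordered list of (direction, test)
# preferences per parent category, with a per-category default child.
HEAD_RULES = {
    "NP": [("left",  lambda t: t.label == "NP"),
           ("right", lambda t: t.label.startswith("N")),
           ("right", lambda t: t.label == "JJ")],
    "VP": [("left",  lambda t: t.label.startswith("VB")),
           ("left",  lambda t: t.label == "VP")],
}

def head_word(tree):
    """Percolate head words bottom-up using the toy rule table."""
    if not tree.children:                  # preterminal: its word is the head
        return tree.word
    for direction, test in HEAD_RULES.get(tree.label, []):
        kids = tree.children if direction == "left" else list(reversed(tree.children))
        for child in kids:
            if test(child):
                return head_word(child)
    # Defaults from the slide: NP takes its right child, everything else its left
    default = tree.children[-1] if tree.label == "NP" else tree.children[0]
    return head_word(default)

# Example: head of (VP (VBD saw) (NP (DT the) (NN witness))) is "saw"
vp = Tree("VP", [Tree("VBD", word="saw"),
                 Tree("NP", [Tree("DT", word="the"), Tree("NN", word="witness")])])
assert head_word(vp) == "saw"
```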

18 Lexicalized PCFGs? Problem: we now have to estimate probabilities like [head-conditioned rule probabilities; formula omitted]. We are never going to get these atomically off of a treebank. Solution: break up the derivation into smaller steps.

19 Lexical Derivation Steps. A simple derivation of a local tree [simplified Charniak 97]: the local tree VP[saw] → VBD[saw] NP[her] NP[today] PP[on] is generated piece by piece through intermediate states such as (VP → VBD...)[saw], (VP → VBD...NP)[saw], and (VP → VBD...PP)[saw]. [Derivation diagram omitted.] We still have to smooth with mono- and non-lexical backoffs. It's markovization again!

20 Lexical Derivation Steps. Another derivation of a local tree [Collins 99]: choose a head tag and word; choose a complement bag; generate children (incl. adjuncts); recursively derive children.

21 Naïve Lexicalized Parsing. We can, in principle, use CKY on lexicalized PCFGs: O(Rn³) time and O(Sn²) memory, but R = rV² and S = sV, so the result is completely impractical (why?). Memory: 10K rules * 50K words * (40 words)² * 8 bytes ≈ 6 TB. We can modify CKY to exploit lexical sparsity: lexicalized symbols are a base grammar symbol and a pointer into the input sentence, not any arbitrary word. Result: O(rn⁵) time, O(sn³) memory. Memory: 10K rules * (40 words)³ * 8 bytes ≈ 5 GB. Now, why do we get these space & time complexities?
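A quick back-of-the-envelope check of the two memory figures above, using the slide's own illustrative constants (10K rules, a 50K-word vocabulary, 40-word sentences, 8 bytes per chart entry):

```python
rules, vocab, n, bytes_per_entry = 10_000, 50_000, 40, 8

# Naive lexicalized CKY: an entry for every (rule, head word, span)
naive = rules * vocab * n**2 * bytes_per_entry
print(f"naive:  {naive / 1e12:.1f} TB")    # ~6.4 TB

# Sparsity-aware version: heads are pointers into the sentence, so the
# vocabulary factor collapses to the sentence length
sparse = rules * n**3 * bytes_per_entry
print(f"sparse: {sparse / 1e9:.1f} GB")    # ~5.1 GB
```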

22 Another view of CKY. The fundamental operation is edge-combining: bestScore(X,i,j): if (j = i+1) return tagScore(X,s[i]); else return max over k and X → Y Z of score(X → Y Z) * bestScore(Y,i,k) * bestScore(Z,k,j). [Edge-combination diagram omitted.] Two string-position indices are required to characterize each edge in memory; three string-position indices are required to characterize each edge combination in time.
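Written out as runnable Python, the recurrence looks like the memoized sketch below; the grammar dictionaries follow the same assumed interface as the inside-algorithm sketch earlier.

```python
from functools import lru_cache

def viterbi_cky(words, binary_rules, lexical_rules):
    """bestScore(X, i, j): probability of the best X-parse of words[i:j].

    binary_rules: dict (X, Y, Z) -> P(X -> Y Z)   (assumed layout)
    lexical_rules: dict (X, word) -> P(X -> word) (assumed layout)
    """
    @lru_cache(maxsize=None)
    def best(X, i, j):
        if j == i + 1:                                 # width-1 span: tag score
            return lexical_rules.get((X, words[i]), 0.0)
        return max((p * best(Y, i, k) * best(Z, k, j)  # max over rule and split
                    for (A, Y, Z), p in binary_rules.items() if A == X
                    for k in range(i + 1, j)),
                   default=0.0)

    return best

# Usage: best = viterbi_cky(words, binary_rules, lexical_rules); best("S", 0, len(words))
```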

23 Lexicalized CKY. Lexicalized CKY has the same fundamental operation, just more edges: bestScore(X,i,j,h): if (j = i+1) return tagScore(X,s[i]); else return the larger of max over k, h', X[h] → Y[h] Z[h'] of score(X[h] → Y[h] Z[h']) * bestScore(Y,i,k,h) * bestScore(Z,k,j,h') and max over k, h', X[h] → Y[h'] Z[h] of score(X[h] → Y[h'] Z[h]) * bestScore(Y,i,k,h') * bestScore(Z,k,j,h). [Edge-combination diagram omitted.] Three string positions for each edge in space; five string positions for each edge combination in time.

24 Quartic Parsing. It turns out you can do better [Eisner 99]: an O(n⁴) algorithm. Still prohibitive in practice if not pruned. [Edge diagrams omitted.]

25 Dependency Parsing. Lexicalized parsers can be seen as producing dependency trees. Each local binary tree corresponds to an attachment in the dependency graph. [Dependency tree figure omitted.]

26 Dependency Parsing. Pure dependency parsing is only cubic [Eisner 99]. There is some work on non-projective dependencies, which are common in, e.g., Czech parsing; these can be handled with MST algorithms [McDonald and Pereira 05], leading to O(n³) or even O(n²) [McDonald et al., 2005]. [Diagrams omitted.]

27 Pruning with Beams. The Collins parser prunes with per-cell beams [Collins 99]. Essentially, run the O(n⁵) CKY but remember only a few hypotheses for each span. If we keep K hypotheses at each span, then we do at most O(nK²) work per span (why?), which keeps things more or less cubic. Side note/hack: certain spans are forbidden entirely on the basis of punctuation (crucial for speed). [Diagram omitted.]
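A sketch of per-cell beam pruning in the same spirit. The chart-cell layout (a dict from (symbol, head index) to score) and the rule encoding are assumptions made for illustration; the point is that with at most K survivors per cell, combining two cells touches at most K * K edge pairs.

```python
import heapq

def prune_cell(cell, K=10):
    """Keep only the K highest-scoring edges of one chart cell.

    cell: dict mapping (symbol, head_index) -> score (assumed layout).
    """
    return dict(heapq.nlargest(K, cell.items(), key=lambda kv: kv[1]))

def combine_cells(left, right, rules):
    """Combine two pruned cells: at most K*K (edge, edge) pairs.

    rules: dict (Y, Z) -> list of (X, head_side, prob), where head_side says
    which child supplies the head word (an assumed, illustrative encoding).
    """
    out = {}
    for (Y, hY), sY in left.items():
        for (Z, hZ), sZ in right.items():
            for X, head_side, p in rules.get((Y, Z), []):
                head = hY if head_side == "left" else hZ
                score = p * sY * sZ
                if score > out.get((X, head), 0.0):
                    out[(X, head)] = score
    return out
```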

28 Pruning with a PCFG. The Charniak parser prunes using a two-pass approach [Charniak 97+]. First, parse with the base grammar and, for each X:[i,j], calculate P(X|i,j,s); this isn't trivial, and there are clever speed-ups. Second, do the full O(n⁵) CKY, skipping any X:[i,j] which had a low (say, below some small threshold) posterior. This avoids almost all work in the second phase! Currently the fastest lexicalized parser. Charniak et al 06: can use more passes. Petrov et al 07: can use many more passes.
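The pruning test relies on span posteriors from the first pass. A sketch, assuming inside and outside chart values for the base grammar are already available; the dictionary layout and the default threshold are illustrative.

```python
def span_posterior(inside, outside, sentence_prob, X, i, j):
    """P(an X spans [i, j] | sentence) = inside * outside / P(sentence).

    inside, outside: dicts mapping (X, i, j) -> probability (assumed layout).
    """
    return inside.get((X, i, j), 0.0) * outside.get((X, i, j), 0.0) / sentence_prob

def keep_edge(inside, outside, sentence_prob, X, i, j, threshold=1e-4):
    """The second, lexicalized pass only builds edges whose base-grammar
    posterior clears the threshold; everything else is skipped."""
    return span_posterior(inside, outside, sentence_prob, X, i, j) >= threshold
```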

29 Pruning with A*. You can also speed up the search without sacrificing optimality. For agenda-based parsers: you can select which items to process first, and you can do this with any figure of merit [Charniak 98]. If your figure of merit is a valid A* heuristic, there is no loss of optimality [Klein and Manning 03].

30 Projection-Based A*. [Figure: the sentence "Factory payrolls fell in Sept." shown under a constituent projection (NP, PP, VP, S), a dependency/head projection (payrolls, in, fell), and the combined lexicalized projection (NP:payrolls, PP:in, VP:fell, S:fell).]

31 A* Speedup. Total time is dominated by calculation of the A* tables in each projection… O(n³).

32 Typical Experimental Setup. Corpus: Penn Treebank, WSJ. Evaluation by precision, recall, F1 (harmonic mean). Training: sections 02-21; Development: section 22; Test: section 23. Example bracket comparison: {[NP,0,2], [NP,3,5], [NP,6,8], [PP,5,8], [VP,2,5], [VP,2,8], [S,1,8]} vs. {[NP,0,2], [NP,3,5], [NP,6,8], [PP,5,8], [NP,3,8], [VP,2,8], [S,1,8]}; Precision = Recall = 7/8.
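A minimal sketch of computing labeled bracket precision, recall, and F1 over sets of (label, start, end) spans like the ones above; it glosses over the multiset and length-cutoff details of the standard evalb scorer.

```python
from collections import Counter

def prf1(gold_brackets, guess_brackets):
    """Labeled bracket precision/recall/F1 over (label, start, end) triples."""
    gold, guess = Counter(gold_brackets), Counter(guess_brackets)
    matched = sum((gold & guess).values())          # multiset intersection
    precision = matched / sum(guess.values()) if guess else 0.0
    recall = matched / sum(gold.values()) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if matched else 0.0
    return precision, recall, f1

# e.g. prf1([("NP", 0, 2), ("VP", 2, 5)], [("NP", 0, 2), ("NP", 3, 5)])
```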

33 Results. Some results: Collins 99, 88.6 F1 (generative lexical); Petrov et al 06, 90.7 F1 (generative unlexicalized). However, bilexical counts rarely make a difference (why?): Gildea 01 found that removing bilexical counts costs < 0.5 F1. Bilexical vs. monolexical vs. smart smoothing.

34 Unlexicalized methods. So far we have looked at the use of lexicalized methods to fix PCFG independence assumptions. Lexicalization creates new complications of inference (computational complexity) and estimation (sparsity). There are lots of improvements to be made without resorting to lexicalization.

35 PCFGs and Independence. Symbols in a PCFG define independence assumptions: at any node, the material inside that node is independent of the material outside that node, given the label of that node. Any information that statistically connects behavior inside and outside a node must flow through that node. [Tree diagrams omitted.]

36 Non-Independence I. Independence assumptions are often too strong. Example: the expansion of an NP is highly dependent on the parent of the NP (i.e., subjects vs. objects). Also: the subject and object expansions are correlated! [Histograms comparing expansions of all NPs, NPs under S, and NPs under VP omitted.]

37 Non-Independence II. Who cares? NB, HMMs, all make false assumptions! For generation, the consequences would be obvious. For parsing, does it impact accuracy? Symptoms of overly strong assumptions: rewrites get used where they don't belong; rewrites get used too often or too rarely. In the PTB, this construction is for possessives.

38 Breaking Up the Symbols. We can relax independence assumptions by encoding dependencies into the PCFG symbols. What are the most useful features to encode? Parent annotation [Johnson 98]; marking possessive NPs.
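As an illustration, parent annotation can be written as a small tree transform; the Tree class is a toy stand-in. Note that, as written, this also annotates preterminal tags, which the lecture treats as a separate split (TAG-PA); restricting it to phrasal nodes is a one-line change.

```python
class Tree:
    def __init__(self, label, children=(), word=None):
        self.label, self.children, self.word = label, list(children), word

def parent_annotate(tree, parent=None):
    """NP under S becomes NP^S, and so on (vertical order 2)."""
    if not tree.children:                      # leaf word: leave unchanged
        return tree
    label = f"{tree.label}^{parent}" if parent else tree.label
    # Note: preterminal tags get annotated too (the TAG-PA variant); skip them
    # here if only phrasal splits are wanted.
    return Tree(label, [parent_annotate(c, tree.label) for c in tree.children])
```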

39 Annotations. Annotations split the grammar categories into sub-categories (in the original sense). Conditioning on history vs. annotating: P(NP^S → PRP) is a lot like P(NP → PRP | S), or equivalently P(PRP | NP, S); P(NP-POS → NNP POS) isn't history conditioning. Feature/unification grammars vs. annotation: we can think of a symbol like NP^NP-POS as NP [parent:NP, +POS]. After parsing with an annotated grammar, the annotations are then stripped for evaluation.

40 Lexicalization. Lexical heads are important for certain classes of ambiguities (e.g., PP attachment). Lexicalizing the grammar creates a much larger grammar (cf. next week): sophisticated smoothing needed, smarter parsing algorithms, more data needed. How necessary is lexicalization? Bilexical vs. monolexical selection; closed vs. open class lexicalization.

41 Unlexicalized PCFGs. What is meant by an unlexicalized PCFG? The grammar is not systematically specified down to the level of lexical items: NP[stocks] is not allowed, but NP^S-CC is fine. Closed vs. open class words (NP^S[the]): there is a long tradition in linguistics of using function words as features or markers for selection, contrary to the bilexical idea of semantic heads; open-class selection is really a proxy for semantics. It's kind of a gradual transition from unlexicalized to lexicalized (but heavily smoothed) grammars.

42 Typical Experimental Setup. Corpus: Penn Treebank, WSJ. Accuracy: F1, the harmonic mean of per-node labeled precision and recall. Here: also size, the number of symbols in the grammar. Passive/complete symbols: NP, NP^S. Active/incomplete symbols: NP → NP CC. Training: sections 02-21; Development: section 22 (here, first 20 files); Test: section 23.

43 Multiple Annotations. Each annotation is done in succession. Order does matter. Too much annotation and we'll have sparsity issues.

44 Horizontal Markovization. [Charts comparing horizontal Markov order 1 with higher orders omitted.]
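Horizontal markovization is easiest to see during binarization: an n-ary rule is broken into binary steps whose intermediate symbols remember only the last h generated siblings. A sketch, with illustrative intermediate-symbol naming:

```python
def binarize_markov(parent, children, h=1):
    """Right-binarize parent -> children, keeping only the last h siblings
    in each intermediate symbol (horizontal Markov order h)."""
    if len(children) <= 2:
        return [(parent, tuple(children))]
    rules, left_symbol = [], parent
    for i in range(len(children) - 2):
        context = "_".join(children[max(0, i - h + 1): i + 1])   # last h siblings
        intermediate = f"@{parent}->..{context}"
        rules.append((left_symbol, (children[i], intermediate)))
        left_symbol = intermediate
    rules.append((left_symbol, (children[-2], children[-1])))
    return rules

# With h=1, VP -> VBD NP NP PP becomes:
#   VP -> VBD @VP->..VBD
#   @VP->..VBD -> NP @VP->..NP
#   @VP->..NP -> NP PP
```

Remembering all previously generated siblings instead corresponds to infinite horizontal order, i.e., plain treebank binarization.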

45 Vertical Markovization. Vertical Markov order: rewrites depend on the past k ancestor nodes (cf. parent annotation). [Example trees and charts for order 1 and order 2 omitted.]

46 Markovization. This leads to a somewhat more general view of generative probabilistic models over trees. Main goal: estimate P(tree). A bit of an interlude: Tree-Insertion Grammars deal with this problem more directly.

47 TIG: Insertion

48 Data-oriented parsing (Bod 1992). A case of Tree-Insertion Grammars: rewrite large (possibly lexicalized) subtrees in a single step. There is derivational ambiguity over whether subtrees were generated atomically or compositionally. Finding the most probable parse is NP-complete due to the unbounded number of rules.

49 Markovization, cont. So the question is, how do we estimate these tree probabilities? What type of tree-insertion grammar do we use? Equivalently, what type of independence assumptions do we impose? Traditional PCFGs are only one type of answer to this question.

50 Vertical and Horizontal. Examples: raw treebank: v=1, h=∞; Johnson 98: v=2, h=∞; Collins 99: v=2, h=2; best F1: v=3, h≤2. [Table of model, F1, and grammar size omitted.]

51 Unary Splits. Problem: unary rewrites are used to transmute categories so a high-probability rule can be used. Solution: mark unary rewrite sites with -U. [Table of annotation, F1, and grammar size omitted.]
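As a toy illustration, the -U marking can be phrased as a tree transform over the same kind of toy Tree class used earlier; the exact conditions used for the UNARY splits in these experiments are more refined than this.

```python
class Tree:
    def __init__(self, label, children=(), word=None):
        self.label, self.children, self.word = label, list(children), word

def mark_unary(tree):
    """Append -U to any nonterminal that rewrites as a single non-lexical child
    (one simple variant of unary-rewrite marking)."""
    if not tree.children:                          # leaf word: leave alone
        return tree
    kids = [mark_unary(c) for c in tree.children]
    unary_internal = len(kids) == 1 and kids[0].children
    label = tree.label + "-U" if unary_internal else tree.label
    return Tree(label, kids)
```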

52 Tag Splits. Problem: treebank tags are too coarse. Example: sentential, PP, and other prepositions are all marked IN. Partial solution: subdivide the IN tag. [Table of annotation, F1, and grammar size omitted.]

53 Other Tag Splits. UNARY-DT: mark demonstratives as DT^U (the X vs. those). UNARY-RB: mark phrasal adverbs as RB^U (quickly vs. very). TAG-PA: mark tags with non-canonical parents (not is an RB^VP). SPLIT-AUX: mark auxiliary verbs with -AUX [cf. Charniak 97]. SPLIT-CC: separate but and & from other conjunctions. SPLIT-%: % gets its own tag. [Table of F1 and grammar size omitted.]

54 Treebank Splits. The treebank comes with some annotations (e.g., -LOC, -SUBJ, etc.). The whole set together hurt the baseline. One in particular is very useful (NP-TMP) when pushed down to the head tag (why?). Gapped S nodes can be marked as well. [Table of annotation, F1, and grammar size omitted.]

55 Yield Splits. Problem: sometimes the behavior of a category depends on something inside its future yield. Examples: possessive NPs; finite vs. infinite VPs; lexical heads! Solution: annotate future elements into nodes. Lexicalized grammars do this (in very careful ways; why?). [Table of annotation, F1, and grammar size omitted.]

56 Distance / Recursion Splits. Problem: vanilla PCFGs cannot distinguish attachment heights. Solution: mark a property of higher or lower sites: contains a verb; is (non-)recursive; base NPs [cf. Collins 99]; right-recursive NPs. [Table of annotation, F1, and grammar size, plus attachment diagram, omitted.]

57 A Fully Annotated (Unlex) Tree

58 Some Test Set Results. [Table of LP, LR, F1, CB, and 0 CB for Magerman, Collins, Klein & Manning, Charniak, and Collins omitted.] Beats first-generation lexicalized parsers. Lots of room to improve; more complex models next.

59 Unlexicalized grammars: SOTA. Klein & Manning 2003's symbol splits were hand-coded. Petrov and Klein (2007) used a hierarchical splitting process to learn symbol inventories, reminiscent of decision trees/CART. Coarse-to-fine parsing makes it very fast. Performance is state of the art!

60 Parse Reranking. Nothing we've seen so far allows arbitrarily non-local features. Assume the number of parses is very small. We can represent each parse T as an arbitrary feature vector φ(T). Typically, all local rules are features; also non-local features, like how right-branching the overall tree is. [Charniak and Johnson 05] gives a rich set of features.

61 Parse Reranking. Since the number of parses is no longer huge, we can enumerate all parses efficiently and use simple machine learning methods to score trees. E.g. maxent reranking: learn a binary classifier over trees where the top candidates are positive and all others are negative, then rank trees by P(+|T). The best parsing numbers have mostly been from reranking systems: Charniak and Johnson 05, 89.7 / 91.3 F1 (generative lexical / reranked); McClosky et al 06, 92.1 F1 (gen + rerank + self-train).
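A sketch of the maxent-style reranking setup described above, using scikit-learn's logistic regression as the binary classifier. The featurize function is a hypothetical user-supplied feature extractor (e.g. rule counts plus non-local features such as right-branching measures); nothing here reproduces the Charniak and Johnson feature set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_reranker(candidate_lists, gold_indices, featurize):
    """Binary maxent-style reranker: best candidates positive, rest negative.

    candidate_lists: list of k-best parse lists, one per training sentence.
    gold_indices: index of the best candidate in each list.
    featurize: assumed user-supplied map from a parse to a feature vector.
    """
    X, y = [], []
    for parses, gold in zip(candidate_lists, gold_indices):
        for i, parse in enumerate(parses):
            X.append(featurize(parse))
            y.append(1 if i == gold else 0)
    model = LogisticRegression(max_iter=1000)
    model.fit(np.array(X), np.array(y))
    return model

def rerank(model, parses, featurize):
    """Sort candidate parses by P(+ | tree) under the trained classifier."""
    probs = model.predict_proba(np.array([featurize(p) for p in parses]))[:, 1]
    return [parses[i] for i in np.argsort(-probs)]
```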

62 Shift-Reduce Parsers. Another way to derive a tree. [Derivation figure omitted.] Parsing: no useful dynamic programming search, but we can still use beam search [Ratnaparkhi 97].

63 Derivational Representations Generative derivational models: How is a PCFG a generative derivational model? Distinction between parses and parse derivations. How could there be multiple derivations?

64 Tree-adjoining grammar (TAG). Start with local trees. Can insert structure with adjunction operators. Mildly context-sensitive. Models long-distance dependencies naturally … as well as other weird stuff that CFGs don't capture well (e.g. cross-serial dependencies).

65 TAG: Adjunction

66 TAG: Long Distance

67 TAG: complexity. Recall that CFG parsing is O(n³); TAG parsing is O(n⁴). However, lexicalization causes the same kinds of complexity increases as in CFG.

68 CCG Parsing Combinatory Categorial Grammar Fully (mono-) lexicalized grammar Categories encode argument sequences Very closely related to the lambda calculus (more later) Can have spurious ambiguities (why?)

69 Digression: Is NL a CFG? Cross-serial dependencies in Dutch

