
Slide 1: Computational Linguistics. Introduction to Parsing with Context Free Grammars.

Slide 2: Chomsky Hierarchy

Slide 3: Weak Equivalence
A grammar should generate all and only the sentences in the language under investigation. Let H be the language under investigation and G the grammar we are developing.
The grammar should generate all sentences in the language, i.e. for any s in H, s is also in L(G).
The grammar should generate only sentences in the language, i.e. for any s in L(G), s is also in H.

Slide 4: All and Only
[diagram: L(G) and H coincide, L(G) = H]

Slide 5: Overgeneration
[diagram: L(G) properly contains H]

Slide 6: Overgeneration
Basic problem: L(G) is larger than H. There are sentences generated by the grammar that are not in H, so the "only" constraint is violated: the grammar is too weak (insufficiently constrained).
Example: a grammar which ignores number and gender.

Slide 7: Undergeneration
[diagram: H properly contains L(G)]

Slide 8: Undergeneration
Basic problem: H is larger than L(G). There are sentences in H that are not generated by the grammar, so the "all" constraint is violated: the grammar is too strong (over-constrained).
Examples (for H = a natural language):
–a grammar which lacks recursion;
–a finite state grammar.

Slide 9: Weak and Strong Equivalence
A grammar/lexicon G generates a characteristic language L(G).
Grammars G1 and G2 are said to be weakly equivalent if L(G1) = L(G2).
A grammar G also assigns one or more phrase structures to any s in L(G).
Weakly equivalent grammars G1 and G2 are said to be strongly equivalent if, in addition, they assign identical phrase structures to any s in L(G1).

Slide 10: Weak Equivalence
G1: A → a, A → aA
G2: A → a, A → Aa
G1 and G2 are weakly equivalent: both generate the language a+, but G1 assigns right-branching structures and G2 left-branching ones, so they are not strongly equivalent.
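To make the contrast concrete, here is a small Python sketch (illustrative code, not from the slides): both grammars derive exactly the strings a, aa, aaa, ..., yet the trees they assign branch in opposite directions.

```python
# Illustrative sketch: G1 (A -> a | a A) and G2 (A -> a | A a) are
# weakly equivalent (same strings) but not strongly equivalent
# (different trees). Trees are nested tuples; leaves are "a".

def g1_tree(n):
    """Right-branching tree G1 assigns to a^n."""
    return "a" if n == 1 else ("a", g1_tree(n - 1))

def g2_tree(n):
    """Left-branching tree G2 assigns to a^n."""
    return "a" if n == 1 else (g2_tree(n - 1), "a")

def leaves(t):
    """Read the terminal string back off a tree."""
    return t if isinstance(t, str) else leaves(t[0]) + leaves(t[1])

for n in (1, 2, 3):
    t1, t2 = g1_tree(n), g2_tree(n)
    assert leaves(t1) == leaves(t2) == "a" * n   # same language
    print(t1, " vs ", t2)                        # different structures
```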

Slide 11: Appropriate Structure
The structure assigned by the grammar should be appropriate. The structure should:
–be understandable;
–allow us to make generalisations;
–reflect the underlying meaning of the sentence.

Slide 12: Ambiguity
A grammar is ambiguous if it assigns two or more structures to the same sentence. The grammar should not generate too many possible structures for the same sentence.
There is a tradeoff between ambiguity and clarity: too much detail can obscure the design principles, while too little detail means that the grammar is undercommitted.

Slide 13: Limitations of CF Grammars
Simple CF grammars tend to overgenerate, and the only mechanism available to control overgeneration is to invent new categories. The proliferation of categories soon becomes intractable. Problems include:
–size of the grammar;
–understandability of the grammar.

Slide 14: Criteria for Evaluating Grammars
–Does it undergenerate?
–Does it overgenerate?
–Does it assign appropriate structures to the sentences it generates?
–Is it simple to understand? How many rules are there? Does it contain generalisations or special cases?
–How ambiguous is it? How many structures does it assign to a given sentence?

Slide 15: CF Phrase Structure Rules
s → np vp
np → D N
vp → V
vp → V np
(4 rules)
A nice grammar, but it overgenerates. Solution: invent more categories (nps, nppl, vps, vppl, etc.).

Slide 16: CF Phrase Structure Rules with Number Agreement
s → nps vps
s → nppl vppl
nps → DS NS
nppl → DPL NPL
vps → VS
vps → VS nps
vps → VS nppl
vppl → VPPL
vppl → VPPL nps
vppl → VPPL nppl
(10 rules)
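The rule growth can be computed mechanically. The following sketch (a hypothetical encoding, using sg/pl suffixes in place of the slide's S/PL naming) expands the 4-rule grammar by splitting each number-bearing category, reproducing the 10 rules above; with further features (gender, person, case) the product grows multiplicatively.

```python
from itertools import product

NUM = ("sg", "pl")
NUMBERED = {"np", "vp", "D", "N", "V"}   # categories that carry number

# Base rules, each with the set of constituents that must agree in number;
# constituents outside that set take either value independently.
BASE = [
    ("s",  ("np", "vp"), {"np", "vp"}),
    ("np", ("D",  "N"),  {"np", "D", "N"}),
    ("vp", ("V",),       {"vp", "V"}),
    ("vp", ("V", "np"),  {"vp", "V"}),   # the object np agrees with nothing
]

expanded = set()
for lhs, rhs, agree in BASE:
    free = [s for s in dict.fromkeys((lhs, *rhs))
            if s in NUMBERED and s not in agree]
    for shared in NUM:
        for choice in product(NUM, repeat=len(free)):
            num = {**{s: shared for s in agree}, **dict(zip(free, choice))}
            tag = lambda s: f"{s}_{num[s]}" if s in num else s
            expanded.add((tag(lhs), tuple(tag(s) for s in rhs)))

for lhs, rhs in sorted(expanded):
    print(lhs, "->", " ".join(rhs))
print(f"{len(BASE)} base rules -> {len(expanded)} expanded rules")  # 4 -> 10
```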

Slide 17: Constraints and Information Structures
PATR-II handles this problem by augmenting CF rules with constraints between constituents. The basic idea is that each constituent of a CF rule is associated with an information structure; we then express constraints between these information structures.

Slide 18: Example of a PATR Rule with a Number Constraint
Rule: s → np vp
<np num> = <vp num>

Slide 19: Example of a Grammar with Number Constraints
s → np vp
  <np num> = <vp num>
np → D N
  <D num> = <N num>
  <np num> = <N num>
vp → V
  <vp num> = <V num>
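A minimal sketch of the idea (toy code, not actual PATR-II syntax): features are flat Python dicts, and each constraint is enforced by a unification step that merges two dicts and fails on conflict.

```python
def unify(f, g):
    """Merge two flat feature dicts; return None on conflicting values."""
    out = dict(f)
    for k, v in g.items():
        if out.setdefault(k, v) != v:
            return None
    return out

# A toy lexicon: word -> (category, feature structure)
LEXICON = {
    "this":  ("D", {"num": "sg"}),
    "these": ("D", {"num": "pl"}),
    "dog":   ("N", {"num": "sg"}),
    "dogs":  ("N", {"num": "pl"}),
    "barks": ("V", {"num": "sg"}),
    "bark":  ("V", {"num": "pl"}),
}

def parse_s(words):
    """Recognise D N V as s -> np vp, enforcing the number constraints."""
    (_, d), (_, n), (_, v) = (LEXICON[w] for w in words)
    np = unify(d, n)                    # np -> D N: <D num> = <N num>
    return np is not None and unify(np, v) is not None  # <np num> = <vp num>

print(parse_s(["this", "dog", "barks"]))    # True
print(parse_s(["these", "dog", "bark"]))    # False: D/N number clash
print(parse_s(["these", "dogs", "barks"]))  # False: np/vp number clash
```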

Slide 20: Summary
Pure CFGs become unwieldy when we try to constrain them to incorporate, for example, agreement information.
PATR-II deals with this problem by associating information structures and constraints with each rule constituent.
Information structures are often referred to as feature structures (F-structures).

Slide 21: Grammar versus Parsing
A grammar is a description of a language: it abstractly associates structures with all and only the strings of the language.
A parser is an implementation of an algorithm that actually discovers the structures assigned by a grammar to a sentence. Typically there may be several different parsing algorithms for achieving this:
–top down strategy;
–bottom up strategy.

Slide 22: Parse Tree
A valid parse tree for a grammar G is a tree:
–whose root is the start symbol of G;
–whose interior nodes are nonterminals of G;
–in which the children of a node T (from left to right) correspond to the symbols on the right-hand side of some production for T in G;
–whose leaf nodes are terminal symbols of G.
Every sentence generated by a grammar has a corresponding parse tree, and every valid parse tree exactly covers a sentence generated by the grammar.
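As a sketch, these conditions can be checked directly against a toy grammar (the fragment below, including S → VP for imperatives, is an assumption for illustration, matching the "book that flight" example on the next slide):

```python
# Sketch: validate a parse tree against the slide's conditions.
# Trees are (label, child, ...) tuples; a pre-terminal's single child
# is the word itself.

GRAMMAR = {
    "S":   {("NP", "VP"), ("VP",)},
    "NP":  {("Det", "Nom")},
    "Nom": {("N",)},
    "VP":  {("V",), ("V", "NP")},
}
START = "S"

def valid(tree, root=True):
    label, *children = tree
    if root and label != START:          # root must be the start symbol
        return False
    if label not in GRAMMAR:             # pre-terminal: rewrites as one word
        return len(children) == 1 and isinstance(children[0], str)
    rhs = tuple(c[0] for c in children)  # children must match the RHS
    if rhs not in GRAMMAR[label]:        # of some production for label
        return False
    return all(valid(c, root=False) for c in children)

def leaves(tree):
    """The sentence the tree exactly covers."""
    label, *children = tree
    if isinstance(children[0], str):
        return children
    return [w for c in children for w in leaves(c)]

t = ("S", ("VP", ("V", "book"),
           ("NP", ("Det", "that"), ("Nom", ("N", "flight")))))
print(valid(t), leaves(t))   # True ['book', 'that', 'flight']
```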

Slide 23: Parsing Problem
Given grammar G and sentence A, find all valid parse trees for G that exactly cover A.
Example tree for "book that flight":
[S [VP [V book] [NP [Det that] [Nom [N flight]]]]]

Slide 24: Soundness and Completeness
A parser is sound if every parse tree it returns is valid.
A parser is complete for grammar G if, for every s ∈ L(G):
–it terminates;
–it produces the corresponding parse tree.
For many purposes, we settle for sound but incomplete parsers.

Slide 25: Top Down
A top-down parser tries to build the tree from the root node S down to the leaves, replacing nodes with non-terminal labels by the RHS of corresponding grammar rules. Nodes with pre-terminal (word class) labels are compared to the input words.
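A minimal top-down, depth-first, backtracking parser over the toy grammar (a sketch; it assumes the grammar has no left recursion, which would send it into an infinite loop):

```python
# Sketch: naive top-down parsing. parse(cats, words) tries to derive the
# word sequence from the category sequence, expanding the leftmost
# non-terminal first and backtracking over alternative rules.

GRAMMAR = {
    "S":   [["NP", "VP"], ["VP"]],
    "NP":  [["Det", "Nom"]],
    "Nom": [["N"]],
    "VP":  [["V"], ["V", "NP"]],
}
LEXICON = {"book": "V", "that": "Det", "flight": "N"}

def parse(cats, words):
    """Yield lists of trees, one per category, exactly covering words."""
    if not cats:
        if not words:
            yield []
        return
    first, rest = cats[0], cats[1:]
    if first in GRAMMAR:                             # expand a non-terminal
        for rhs in GRAMMAR[first]:
            for trees in parse(rhs + rest, words):
                n = len(rhs)
                yield [tuple([first, *trees[:n]])] + trees[n:]
    elif words and LEXICON.get(words[0]) == first:   # match a pre-terminal
        for trees in parse(rest, words[1:]):
            yield [(first, words[0])] + trees

for tree, in parse(["S"], "book that flight".split()):
    print(tree)   # ('S', ('VP', ('V', 'book'), ('NP', ...)))
```

Note the wasted work: because "book" is a verb, the S → NP VP expansion is tried first and fails only after predicting a Det, illustrating how a top-down parser generates trees that are not consistent with the input.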

Slide 26: Top Down Search Space
[diagram: the search space of rule expansions, from the start node (S) to the goal node covering the input]

Slide 27: Bottom Up
Each state is a forest of trees. The start node is a forest of nodes labelled with pre-terminal categories (word classes derived from the lexicon). Transformations look for places where the RHS of a rule can fit; any such place is replaced with a node labelled with the LHS of the rule.
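A brute-force sketch of this search (exponential, for illustration only). The lexicon here makes "book" ambiguous between V and N, so the search is data-directed: it starts from the words that are actually there.

```python
# Sketch: exhaustive bottom-up (shift-reduce) search. A state is a
# forest of trees plus the remaining words; "shift" tags the next word
# with a lexical category, "reduce" rewrites any span matching a rule's
# RHS as a node labelled with the rule's LHS.

RULES = [
    ("S",   ("NP", "VP")), ("S",  ("VP",)),
    ("NP",  ("Det", "Nom")),
    ("Nom", ("N",)),
    ("VP",  ("V",)), ("VP", ("V", "NP")),
]
LEXICON = {"book": ["V", "N"], "that": ["Det"], "flight": ["N"]}

def parses(words):
    seen = set()
    def search(forest, rest):
        if not rest and len(forest) == 1 and forest[0][0] == "S":
            yield forest[0]
        for lhs, rhs in RULES:                       # reduce
            n = len(rhs)
            for i in range(len(forest) - n + 1):
                if tuple(t[0] for t in forest[i:i + n]) == rhs:
                    new = forest[:i] + [(lhs, *forest[i:i + n])] + forest[i + n:]
                    yield from search(new, rest)
        if rest:                                     # shift
            for cat in LEXICON[rest[0]]:
                yield from search(forest + [(cat, rest[0])], rest[1:])
    for t in search([], list(words)):
        if t not in seen:                            # the same tree can be
            seen.add(t)                              # reached along many paths
            yield t

for t in parses("book that flight".split()):
    print(t)
```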

Slide 28: Bottom Up Search Space
[diagram: the search space of reductions, from the input words up to an S node]

Slide 29: Top Down vs Bottom Up
Top down:
–For: never wastes time exploring trees that cannot be derived from S.
–Against: can generate trees that are not consistent with the input.
Bottom up:
–For: never wastes time building trees that cannot lead to input text segments.
–Against: can generate subtrees that can never lead to an S node.

Slide 30: Top Down Parsing: Remarks
–Top-down parsers do well if there is useful grammar-driven control: the search can be directed by the grammar.
–Left-recursive rules can cause non-termination problems.
–A top-down parser will do badly if there are many different rules for the same LHS. Consider a grammar with 600 rules for S, 599 of which start with NP but one of which starts with V, and a sentence that starts with V.
–Top-down parsing is unsuitable for rewriting parts of speech (pre-terminals) as words (terminals); in practice that is always done bottom-up, as lexical lookup.
–Useless work: expands constituents that are possible top-down but not present in the input.
–Repeated work: wherever there is common substructure.

Slide 31: Bottom Up Parsing: Remarks
–Empty categories cause a termination problem unless rewriting of empty constituents is somehow restricted (but then the parser is generally incomplete).
–Inefficient when there is great lexical ambiguity (grammar-driven control might help here).
–Conversely, it is data-directed: it attempts to parse the words that are there.
–Both top-down (LL) and bottom-up (LR) parsers can do work exponential in the sentence length on NLP problems.
–Useless work: builds subtrees that are locally possible but globally impossible.
–Repeated work: wherever there is common substructure.

Slide 32: Development of a Concrete Strategy
Combine the best features of both top-down and bottom-up strategies:
–top-down, grammar-directed control;
–bottom-up filtering.
Examining alternatives in parallel uses too much memory, so we use a depth-first strategy with agenda-based control.
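One standard way to realise bottom-up filtering (an illustrative sketch; the slides do not name a specific mechanism) is a left-corner table: a top-down prediction of category A is abandoned unless the category of the next input word can appear as the leftmost descendant of A.

```python
# Sketch: computing a left-corner table for the toy grammar and using it
# to filter top-down predictions bottom-up.

GRAMMAR = {
    "S":   [["NP", "VP"], ["VP"]],
    "NP":  [["Det", "Nom"]],
    "Nom": [["N"]],
    "VP":  [["V"], ["V", "NP"]],
}

def left_corner_table():
    """LC[A] = every category that can begin a phrase of category A."""
    lc = {a: {rhs[0] for rhs in rules} for a, rules in GRAMMAR.items()}
    changed = True
    while changed:                       # transitive closure
        changed = False
        for a in lc:
            for b in list(lc[a]):
                new = lc.get(b, set()) - lc[a]
                if new:
                    lc[a] |= new
                    changed = True
    return lc

LC = left_corner_table()

def compatible(cat, next_word_cat):
    """Keep a top-down prediction only if the next word can begin it."""
    return cat == next_word_cat or next_word_cat in LC.get(cat, ())

print(sorted(LC["S"]))        # ['Det', 'NP', 'V', 'VP']
print(compatible("S", "V"))   # True: keep predicting S before a verb
print(compatible("NP", "V"))  # False: prune NP before a verb
```

With this filter, the 600-rules-for-S scenario from slide 30 is defused: when the sentence starts with a verb, the 599 NP-initial expansions are pruned by a table lookup instead of being explored and abandoned.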

