1 Basic Parsing with Context-Free Grammars Slides adapted from Dan Jurafsky and Julia Hirschberg.


2 Homework Announcements and Questions?
Last year's performance:
– Source classification: 89.7% average accuracy, SD of 5
– Topic classification: 37.1% average accuracy, SD of 13
Topic classification is actually 12-way classification; no document is tagged with BT_8 (finance).

3 What's right/wrong with….
Top-Down parsers
– They never explore illegal parses (e.g., ones that can't form an S) -- but they waste time on trees that can never match the input, and may reparse the same constituent repeatedly.
Bottom-Up parsers
– They never explore trees inconsistent with the input -- but they waste time exploring illegal parses (with no S root).
For both: find a control strategy -- how to explore the search space efficiently?
– Pursue all parses in parallel, or backtrack, or …?
– Which rule to apply next?
– Which node to expand next?

4 Some Solutions
Dynamic Programming Approaches
– Use a chart to represent partial results
CKY Parsing Algorithm
– Bottom-up
– Grammar must be in Chomsky Normal Form
– The parse tree might not be consistent with linguistic theory
Earley Parsing Algorithm
– Top-down
– Expectations about constituents are confirmed by input
– A POS tag for a word that is not predicted is never added
Chart Parser

5 Earley Intuition
1. Extend all rules top-down, creating predictions
2. Read a word
   1. When the word matches a prediction, extend the remainder of the rule
   2. Add new predictions
   3. Go to step 2
3. Look at the final chart entry to see if you have a winner

6 Earley Parsing
Allows arbitrary CFGs
Fills a table in a single sweep over the input words
– Table is length N+1; N is the number of words
– Table entries represent
   Completed constituents and their locations
   In-progress constituents
   Predicted constituents

7 States
The table entries are called states and are represented with dotted rules.
S -> · VP               A VP is predicted
NP -> Det · Nominal     An NP is in progress
VP -> V NP ·            A VP has been found

8 States/Locations
It would be nice to know where these things are in the input, so…
S -> · VP [0,0]              A VP is predicted at the start of the sentence
NP -> Det · Nominal [1,2]    An NP is in progress; the Det goes from 1 to 2
VP -> V NP · [0,3]           A VP has been found starting at 0 and ending at 3
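As a concrete aside (not from the slides), a dotted-rule state might be represented like this; the class and field names are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class State:
    """One chart entry: a dotted rule plus the input span it covers."""
    lhs: str               # left-hand side, e.g. "VP"
    rhs: Tuple[str, ...]   # right-hand side, e.g. ("V", "NP")
    dot: int               # position of the dot within rhs
    start: int             # where this constituent begins in the input
    end: int               # how far the recognized part extends

    def is_complete(self) -> bool:
        return self.dot == len(self.rhs)      # dot has moved past the whole RHS

    def next_category(self) -> Optional[str]:
        return None if self.is_complete() else self.rhs[self.dot]

# The three states from this slide:
predicted   = State("S",  ("VP",),            0, 0, 0)   # S -> · VP [0,0]
in_progress = State("NP", ("Det", "Nominal"), 1, 1, 2)   # NP -> Det · Nominal [1,2]
found       = State("VP", ("V", "NP"),        2, 0, 3)   # VP -> V NP · [0,3]
```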

9 Graphically

10 Earley
As with most dynamic programming approaches, the answer is found by looking in the table in the right place.
In this case, there should be a complete S state in the final column that spans from 0 to N.
If that's the case, you're done.
– S -> α · [0,N]

11 Earley Algorithm
March through the chart left-to-right. At each step, apply one of 3 operators:
– Predictor: create new states representing top-down expectations
– Scanner: match word predictions (rule with a word after the dot) against the input words
– Completer: when a state is complete, see what rules were looking for that completed constituent

12 Predictor
Given a state
– With a non-terminal to the right of the dot
– That is not a part-of-speech category
– Create a new state for each expansion of that non-terminal
– Place these new states into the same chart entry as the generating state, beginning and ending where the generating state ends
– So the predictor, looking at S -> · VP [0,0]
– results in
   VP -> · Verb [0,0]
   VP -> · Verb NP [0,0]

13 Scanner
Given a state
– With a non-terminal to the right of the dot
– That is a part-of-speech category
– If the next word in the input matches this part of speech
– Create a new state with the dot moved over the non-terminal
– So the scanner, looking at VP -> · Verb NP [0,0]
– If the next word, "book", can be a verb, add the new state:
   VP -> Verb · NP [0,1]
– Add this state to the chart entry following the current one
– Note: the Earley algorithm uses its top-down predictions to disambiguate POS! Only a POS predicted by some state can get added to the chart!

14 Completer
Applied to a state when its dot has reached the right end of the rule.
The parser has discovered a category over some span of the input.
Find and advance all previous states that were looking for this category
– Copy the state, move the dot, insert it into the current chart entry
Given:
– NP -> Det Nominal · [1,3]
– VP -> Verb · NP [0,1]
Add:
– VP -> Verb NP · [0,3]

15 Earley: how do we know we are done?
Find a complete S state in the final column that spans from 0 to N.
If that's the case, you're done.
– S -> α · [0,N]

16 Earley
More specifically…
1. Predict all the states you can up front
2. Read a word
   1. Extend states based on matches
   2. Add new predictions
   3. Go to step 2
3. Look at the final chart entry to see if you have a winner

17 Example Book that flight We should find… an S from 0 to 3 that is a completed state…
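Below is a self-contained sketch of an Earley recognizer run on this example, implementing the Predictor, Scanner, and Completer described above. The toy grammar and lexicon are an assumption, simplified in the spirit of the sample grammar on the next slide rather than copied from it.

```python
from collections import namedtuple

State = namedtuple("State", "lhs rhs dot start end")

# Illustrative toy grammar and lexicon (an assumption, not the book's full grammar).
GRAMMAR = {
    "S":       [["VP"]],
    "VP":      [["Verb", "NP"]],
    "NP":      [["Det", "Nominal"]],
    "Nominal": [["Noun"]],
}
LEXICON  = {"book": {"Verb"}, "that": {"Det"}, "flight": {"Noun"}}
POS_TAGS = {"Verb", "Det", "Noun"}

def next_sym(s):
    return s.rhs[s.dot] if s.dot < len(s.rhs) else None

def earley_recognize(words):
    chart = [[] for _ in range(len(words) + 1)]

    def add(state, i):
        if state not in chart[i]:
            chart[i].append(state)

    add(State("gamma", ("S",), 0, 0, 0), 0)           # dummy start state

    for i in range(len(words) + 1):
        j = 0
        while j < len(chart[i]):                      # chart[i] may grow as we process it
            st = chart[i][j]
            nxt = next_sym(st)
            if nxt is None:                           # COMPLETER: advance everyone waiting for st.lhs
                for old in chart[st.start]:
                    if next_sym(old) == st.lhs:
                        add(State(old.lhs, old.rhs, old.dot + 1, old.start, i), i)
            elif nxt in POS_TAGS:                     # SCANNER: check the next input word
                if i < len(words) and nxt in LEXICON.get(words[i], set()):
                    add(State(nxt, (words[i],), 1, i, i + 1), i + 1)
            else:                                     # PREDICTOR: expand the non-terminal
                for rhs in GRAMMAR.get(nxt, []):
                    add(State(nxt, tuple(rhs), 0, i, i), i)
            j += 1

    # Success: a complete S spanning the whole input sits in the final column.
    return any(s.lhs == "S" and next_sym(s) is None and s.start == 0 and s.end == len(words)
               for s in chart[len(words)])

print(earley_recognize("book that flight".split()))   # True
```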

18 Sample Grammar

19 Example

20 Example

21 Example

22 Details
What kind of algorithm did we just describe?
– Not a parser – a recognizer
The presence of an S state with the right attributes in the right place indicates a successful recognition.
But no parse tree… no parser.
That's how we solve (not) an exponential problem in polynomial time.

23 Converting Earley from Recognizer to Parser
With the addition of a few pointers we have a parser.
Augment the "Completer" to point to where we came from.

Augmenting the chart with structural information
[Figure: chart states (S8–S13) linked by back-pointers added by the Completer]

25 Retrieving Parse Trees from the Chart
All the possible parses for an input are in the table.
We just need to read off all the back-pointers from every complete S in the last column of the table:
– Find all the S -> α · [0,N]
– Follow the structural traces left by the Completer
Of course, this won't be polynomial time, since there could be an exponential number of trees.
But we can at least represent ambiguity efficiently.
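A rough sketch of the read-off step, under the assumption that the Completer has stored back-pointers to the child states on each completed state (the field names word and children here are hypothetical, not from the slides):

```python
def tree_from_state(state):
    """Recursively follow a completed state's back-pointers to build a bracketed tree.

    Assumes: state.lhs is the category, state.word is set only for states created by
    the Scanner, and state.children lists the completed child states, left to right.
    """
    if getattr(state, "word", None) is not None:      # POS state produced by the Scanner
        return (state.lhs, state.word)
    return (state.lhs,) + tuple(tree_from_state(c) for c in state.children)

# Hypothetical usage: one tree per complete S spanning [0, N] in the final column.
# trees = [tree_from_state(s) for s in chart[-1]
#          if s.lhs == "S" and s.is_complete() and s.start == 0]
```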

26 Left Recursion vs. Right Recursion Depth-first search will never terminate if grammar is left recursive (e.g. NP --> NP PP)

Solutions:
– Rewrite the grammar (automatically?) to a weakly equivalent one which is not left-recursive
e.g. The man {on the hill with the telescope…}
NP -> NP PP (wanted: Nom plus a sequence of PPs)
NP -> Nom PP
NP -> Nom
Nom -> Det N
…becomes…
NP -> Nom NP'
Nom -> Det N
NP' -> PP NP' (wanted: a sequence of PPs)
NP' -> ε
Not so obvious what these rules mean…
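This rewrite can be done mechanically. Here is a small sketch of eliminating immediate left recursion for one non-terminal; the grammar encoding is my own assumption, and note that the general transformation yields a weakly equivalent grammar that differs slightly from the hand rewrite above:

```python
def remove_immediate_left_recursion(grammar, nt):
    """Rewrite  nt -> nt a | b   as   nt -> b nt'  and  nt' -> a nt' | epsilon.

    `grammar` maps each non-terminal to a list of right-hand sides (lists of symbols).
    """
    recursive    = [rhs[1:] for rhs in grammar[nt] if rhs and rhs[0] == nt]      # the "a" parts
    nonrecursive = [rhs for rhs in grammar[nt] if not rhs or rhs[0] != nt]       # the "b" parts
    if not recursive:
        return grammar
    new_nt = nt + "'"
    grammar[nt]     = [beta + [new_nt] for beta in nonrecursive]
    grammar[new_nt] = [alpha + [new_nt] for alpha in recursive] + [[]]           # [] = epsilon
    return grammar

g = {"NP":  [["NP", "PP"], ["Nom", "PP"], ["Nom"]],
     "Nom": [["Det", "N"]]}
print(remove_immediate_left_recursion(g, "NP"))
# NP  -> Nom PP NP' | Nom NP'
# NP' -> PP NP' | epsilon
```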

28
– Harder to detect and eliminate non-immediate left recursion:
   NP --> Nom PP
   Nom --> NP
– Fix depth of search explicitly
– Rule ordering: non-recursive rules first
   NP --> Det Nom
   NP --> NP PP

29 Another Problem: Structural ambiguity Multiple legal structures – Attachment (e.g. I saw a man on a hill with a telescope) – Coordination (e.g. younger cats and dogs) – NP bracketing (e.g. Spanish language teachers)

30 NP vs. VP Attachment

31 Solution? – Return all possible parses and disambiguate using “other methods”

32 Probabilistic Parsing

33 How to do parse disambiguation Probabilistic methods Augment the grammar with probabilities Then modify the parser to keep only most probable parses And at the end, return the most probable parse

34 Probabilistic CFGs The probabilistic model – Assigning probabilities to parse trees Getting the probabilities for the model Parsing with probabilities – Slight modification to dynamic programming approach – Task is to find the max probability tree for an input

35 Probability Model
Attach probabilities to grammar rules.
The expansions for a given non-terminal sum to 1:
VP -> Verb          .55
VP -> Verb NP       .40
VP -> Verb NP NP    .05
– Read this as P(specific rule | LHS)
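As a quick sketch of what this looks like as data (the dictionary encoding is my own, not the slides'):

```python
from math import isclose

# P(rule | LHS): the probabilities of all expansions of each non-terminal sum to 1.
PCFG = {
    "VP": {("Verb",): 0.55, ("Verb", "NP"): 0.40, ("Verb", "NP", "NP"): 0.05},
}

for lhs, expansions in PCFG.items():
    assert isclose(sum(expansions.values()), 1.0), f"{lhs} expansions must sum to 1"
```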

36 PCFG

37 PCFG

38 Probability Model (1) A derivation (tree) consists of the set of grammar rules that are in the tree The probability of a tree is just the product of the probabilities of the rules in the derivation.

39 Probability model P(T,S) = P(T)P(S|T) = P(T); since P(S|T)=1

40 Probability Model (1.1) The probability of a word sequence P(S) is the probability of its tree in the unambiguous case. It’s the sum of the probabilities of the trees in the ambiguous case.
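Both statements reduce to a product over rules per tree and a sum over trees per sentence; here is a small sketch with invented rule probabilities:

```python
from math import prod   # Python 3.8+

def tree_probability(rules_used, rule_prob):
    """P(T) = product of the probabilities of the rules in the derivation."""
    return prod(rule_prob[r] for r in rules_used)

def sentence_probability(trees, rule_prob):
    """P(S) = sum of P(T) over all trees that yield the sentence."""
    return sum(tree_probability(t, rule_prob) for t in trees)

# Hypothetical PCFG fragment (each LHS's expansions sum to 1).
rule_prob = {"S->NP VP": 1.0,
             "VP->V NP": 0.4, "VP->V NP PP": 0.6,
             "NP->Det N": 0.7, "NP->NP PP": 0.3,
             "PP->P NP": 1.0}

# Two hypothetical parses of the same PP-attachment-ambiguous sentence.
t1 = ["S->NP VP", "NP->Det N", "VP->V NP PP", "NP->Det N", "PP->P NP", "NP->Det N"]
t2 = ["S->NP VP", "NP->Det N", "VP->V NP", "NP->NP PP", "NP->Det N", "PP->P NP", "NP->Det N"]
print(tree_probability(t1, rule_prob), sentence_probability([t1, t2], rule_prob))
```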

41 Getting the Probabilities From an annotated database (a treebank) – So for example, to get the probability for a particular VP rule just count all the times the rule is used and divide by the number of VPs overall.
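A minimal sketch of that counting step, assuming the rule occurrences have already been read off the treebank trees:

```python
from collections import Counter

def estimate_pcfg(rule_occurrences):
    """rule_occurrences: iterable of (lhs, rhs) pairs, one per rule use in the treebank.

    Returns P(lhs -> rhs | lhs) = count(lhs -> rhs) / count(lhs).
    """
    rule_occurrences = list(rule_occurrences)
    rule_counts = Counter(rule_occurrences)
    lhs_counts = Counter(lhs for lhs, _ in rule_occurrences)
    return {(lhs, rhs): c / lhs_counts[lhs] for (lhs, rhs), c in rule_counts.items()}

# e.g. three VPs observed in a tiny, made-up treebank:
probs = estimate_pcfg([("VP", ("Verb", "NP")), ("VP", ("Verb", "NP")), ("VP", ("Verb",))])
print(probs[("VP", ("Verb", "NP"))])   # 0.666...
```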

42 TreeBanks

43 Treebanks

44 Treebanks

45 Treebank Grammars

46 Lots of flat rules

47 Example sentences from those rules Total: over 17,000 different grammar rules in the 1-million word Treebank corpus

48 Probabilistic Grammar Assumptions We’re assuming that there is a grammar to be used to parse with. We’re assuming the existence of a large robust dictionary with parts of speech We’re assuming the ability to parse (i.e. a parser) Given all that… we can parse probabilistically

49 Typical Approach Bottom-up (CKY) dynamic programming approach Assign probabilities to constituents as they are completed and placed in the table Use the max probability for each constituent going up

50 What's that last bullet mean?
Say we're talking about a final part of a parse:
– S -> NP[0,i] VP[i,j]
The probability of the S is P(S -> NP VP) * P(NP) * P(VP).
P(NP) and P(VP) are already known, since we're doing bottom-up parsing.

51 Max I said the P(NP) is known. What if there are multiple NPs for the span of text in question (0 to i)? Take the max (where?)
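Below is a minimal probabilistic CKY sketch (Python; the CNF grammar and probabilities are invented for illustration, not taken from the slides). The max is taken per non-terminal, per span, as each table cell is filled:

```python
from collections import defaultdict

# Toy PCFG in Chomsky Normal Form (invented numbers, for illustration only).
LEX    = {("Det", "the"): 1.0, ("N", "dog"): 0.5, ("N", "cat"): 0.5, ("V", "saw"): 1.0}
BINARY = {("S", ("NP", "VP")): 1.0, ("NP", ("Det", "N")): 1.0, ("VP", ("V", "NP")): 1.0}

def pcky(words):
    n = len(words)
    # table[i][j] maps a non-terminal to the best probability of it spanning words[i:j].
    table = [[defaultdict(float) for _ in range(n + 1)] for _ in range(n + 1)]

    for i, w in enumerate(words):                     # length-1 spans come from the lexicon
        for (pos, word), p in LEX.items():
            if word == w:
                table[i][i + 1][pos] = max(table[i][i + 1][pos], p)

    for span in range(2, n + 1):                      # build longer spans bottom-up
        for i in range(0, n - span + 1):
            j = i + span
            for k in range(i + 1, j):                 # split point
                for (lhs, (b, c)), p in BINARY.items():
                    if table[i][k][b] > 0 and table[k][j][c] > 0:
                        cand = p * table[i][k][b] * table[k][j][c]
                        if cand > table[i][j][lhs]:   # keep only the max per constituent
                            table[i][j][lhs] = cand
    return table[0][n].get("S", 0.0)

print(pcky("the dog saw the cat".split()))            # probability of the best S parse: 0.25
```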

52 Problems with PCFGs The probability model we’re using is just based on the rules in the derivation… – Doesn’t use the words in any real way – Doesn’t take into account where in the derivation a rule is used

53 Solution Add lexical dependencies to the scheme… – Infiltrate the predilections of particular words into the probabilities in the derivation – I.e. Condition the rule probabilities on the actual words

54 Heads To do that we’re going to make use of the notion of the head of a phrase – The head of an NP is its noun – The head of a VP is its verb – The head of a PP is its preposition (It’s really more complicated than that but this will do.)

55 Example (right) Attribute grammar

56 Example (wrong)

57 How?
We used to have:
– VP -> V NP PP    P(rule | VP)
   That's the count of this rule divided by the number of VPs in a treebank
Now we have:
– VP(dumped) -> V(dumped) NP(sacks) PP(in)
– P(r | VP ^ dumped is the verb ^ sacks is the head of the NP ^ in is the head of the PP)
– Not likely to have significant counts in any treebank

58 Declare Independence
When stuck, exploit independence and collect the statistics you can…
We'll focus on capturing two things:
– Verb subcategorization
   Particular verbs have affinities for particular VPs
– Objects' affinities for their predicates (mostly their mothers and grandmothers)
   Some objects fit better with some predicates than others

59 Subcategorization
Condition particular VP rules on their head… so for r: VP -> V NP PP,
P(r | VP)
becomes
P(r | VP ^ dumped)
What's the count? The number of times this rule was used with the head dumped, divided by the total number of VPs that dumped appears in (as head).
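That relative-frequency estimate, written out as a sketch (the data encoding and the counts are hypothetical):

```python
from collections import Counter

def head_conditioned_rule_prob(vp_rule_uses):
    """vp_rule_uses: iterable of (rule, head_verb) pairs, one per VP observed in a treebank.

    Returns P(rule | VP, head) = count(rule, head) / count(head).
    """
    vp_rule_uses = list(vp_rule_uses)
    pair_counts = Counter(vp_rule_uses)
    head_counts = Counter(head for _, head in vp_rule_uses)
    return {(rule, head): c / head_counts[head] for (rule, head), c in pair_counts.items()}

# Tiny made-up sample: three VPs headed by "dumped".
sample = [("VP -> V NP PP", "dumped"), ("VP -> V NP PP", "dumped"), ("VP -> V NP", "dumped")]
print(head_conditioned_rule_prob(sample)[("VP -> V NP PP", "dumped")])   # 0.666...
```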

60 Example (right) Attribute grammar

61 Probability model
P(T,S) =
S -> NP VP (.5) *
VP(dumped) -> V NP PP (.5)    (T1)
VP(ate) -> V NP PP (.03)
VP(dumped) -> V NP (.2)       (T2)

62 Preferences
Subcategorization captures the affinity between VP heads (verbs) and the VP rules they go with.
What about the affinity between VP heads and the heads of the other daughters of the VP?
Back to our examples…

63 Example (right)

Example (wrong)

65 Preferences
The issue here is the attachment of the PP. So the affinities we care about are the ones between dumped and into vs. sacks and into.
So count the places where dumped is the head of a constituent that has a PP daughter with into as its head, and normalize; vs. the situation where sacks is the head of a constituent with a PP daughter headed by into.
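Written as a toy computation (all counts below are invented, purely to illustrate the comparison):

```python
def attachment_preference(count_head_with_into_pp, count_head_total):
    """Relative-frequency estimate: P(a PP headed by 'into' attaches | this head)."""
    return count_head_with_into_pp / count_head_total

# Hypothetical treebank counts:
p_into_given_dumped = attachment_preference(count_head_with_into_pp=60, count_head_total=84)
p_into_given_sacks  = attachment_preference(count_head_with_into_pp=2,  count_head_total=530)
print(p_into_given_dumped > p_into_given_sacks)   # True: prefer attaching the PP to "dumped"
```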

66 Probability model
P(T,S) =
S -> NP VP (.5) *
VP(dumped) -> V NP PP(into) (.7)    (T1)
NOM(sacks) -> NOM PP(into) (.01)    (T2)

67 Preferences (2) Consider the VPs – Ate spaghetti with gusto – Ate spaghetti with marinara The affinity of gusto for eat is much larger than its affinity for spaghetti On the other hand, the affinity of marinara for spaghetti is much higher than its affinity for ate

68 Preferences (2)
Note the relationship here is more distant and doesn't involve a headword, since gusto and marinara aren't the heads of the PPs.
[Tree sketches for "Ate spaghetti with gusto" and "Ate spaghetti with marinara", with nodes labeled VP(ate), PP(with), NP(spag)]

69 Summary
Context-Free Grammars
Parsing
– Top Down, Bottom Up Metaphors
– Dynamic Programming Parsers: CKY, Earley
Disambiguation:
– PCFG
– Probabilistic Augmentations to Parsers
– Tradeoffs: accuracy vs. data sparsity
– Treebanks