Parsing David Kauchak CS159 – Spring 2019

Admin: Assignment 3. Quiz #1 results: Q1: 28 (77%), Q2: 32 (89%), Q3: 33 (92%); average: 30 (84%).

Parsing: Parsing is the field of NLP interested in automatically determining the syntactic structure of a sentence. Parsing can also be thought of as determining which sentences are "valid" English sentences.

Parsing: We have a grammar; determine the possible parse tree(s). Approaches? Algorithms? Let's start with parsing with a CFG (no probabilities).
Example sentence: I eat sushi with tuna
Grammar:
S → NP VP
NP → PRP
NP → N PP
VP → V NP
VP → V NP PP
PP → IN N
PRP → I
V → eat
N → sushi
N → tuna
IN → with
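(Not in the original slides: a minimal sketch of how this toy grammar might be encoded in Python; the names GRAMMAR, LEXICON, and SENTENCE are my own.)

    # Hypothetical encoding of the toy grammar above.
    # Nonterminal rules: left-hand side -> list of possible right-hand sides.
    GRAMMAR = {
        "S":  [["NP", "VP"]],
        "NP": [["PRP"], ["N", "PP"]],
        "VP": [["V", "NP"], ["V", "NP", "PP"]],
        "PP": [["IN", "N"]],
    }

    # Preterminal (lexical) rules: part of speech -> words it can cover.
    LEXICON = {
        "PRP": {"I"},
        "V":   {"eat"},
        "N":   {"sushi", "tuna"},
        "IN":  {"with"},
    }

    SENTENCE = "I eat sushi with tuna".split()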

Parsing:
Top-down parsing: ends up doing a lot of repeated work; doesn't take into account the words in the sentence until the end!
Bottom-up parsing: constrains based on the words; avoids repeated work (dynamic programming); doesn't take into account the high-level structure until the end!
CKY parser

Parsing:
Top-down parsing: start at the top (usually S) and apply rules, matching left-hand sides and replacing them with right-hand sides.
Bottom-up parsing: start at the bottom (i.e. the words) and build the parse tree up from there, matching right-hand sides and replacing them with left-hand sides.
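(Not in the original slides: a sketch of a top-down, recursive-descent recognizer over the GRAMMAR/LEXICON encoding above. It tries every rule for a symbol, replacing left-hand sides with right-hand sides; it only terminates because this toy grammar has no left-recursive rules.)

    def expand(symbol, words, i):
        # Yield every end position j such that `symbol` can derive words[i:j].
        if symbol in LEXICON:                        # preterminal: match one word
            if i < len(words) and words[i] in LEXICON[symbol]:
                yield i + 1
            return
        for rhs in GRAMMAR.get(symbol, []):          # try each rule, top-down
            positions = [i]
            for child in rhs:                        # expand the RHS left to right
                positions = [j for p in positions
                               for j in expand(child, words, p)]
            yield from positions

    # Recognize: can S derive the entire sentence?
    print(any(j == len(SENTENCE) for j in expand("S", SENTENCE, 0)))  # True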

Parsing Example: the parse tree for "book that flight": (S (VP (Verb book) (NP (Det that) (Nominal (Noun flight)))))

Top Down Parsing (the original slides animate the search; reconstructed trace for "book that flight"):
1. Try S → NP VP with NP → Pronoun: "book" is not a Pronoun. ✗
2. Try NP → ProperNoun: "book" is not a ProperNoun. ✗
3. Try NP → Det Nominal: "book" is not a Det. ✗
4. Try S → Aux NP VP: "book" is not an Aux. ✗
5. Try S → VP with VP → Verb: "book" is a Verb, but "that flight" is left unaccounted for. ✗
6. Try VP → Verb NP: Verb = "book"; then NP → Pronoun: "that" is not a Pronoun. ✗
7. NP → ProperNoun: "that" is not a ProperNoun. ✗
8. NP → Det Nominal: Det = "that", Nominal → Noun, Noun = "flight". ✓
Resulting parse: (S (VP (Verb book) (NP (Det that) (Nominal (Noun flight)))))

Bottom Up Parsing (the original slides animate the search; reconstructed trace for "book that flight"):
1. Hypothesize "book" as a Noun; build Nominal → Noun over it.
2. Try Nominal → Nominal Noun: "that" cannot be a Noun. ✗
3. Try Nominal → Nominal PP: "that flight" parses as an NP (Det + Nominal), not as a PP, so the Noun reading of "book" dead-ends. ✗
4. Hypothesize "book" as a Verb; build NP → Det Nominal over "that flight" (Det = "that", Nominal → Noun, Noun = "flight").
5. Try VP → VP PP on top of VP → Verb NP: there is no PP left to combine with. ✗
6. VP → Verb NP covers the whole sentence, and S → VP completes the parse. ✓
Resulting parse: (S (VP (Verb book) (NP (Det that) (Nominal (Noun flight)))))

Parsing Pros/Cons?
Top-down: only examines parses that could be valid parses (i.e. with an S on top), but doesn't take into account the actual words!
Bottom-up: only examines structures that have the actual words as the leaves, but examines sub-parses that may NOT result in a valid parse!

Why is parsing hard?
Actual grammars are large.
Lots of ambiguity! Most sentences have many parses; some sentences have a lot of parses. Even for sentences that are not ambiguous, there is often ambiguity for subtrees (i.e. multiple ways to parse a phrase).

Why is parsing hard? I saw the man on the hill with the telescope. What are some interpretations?

Structural Ambiguity Can Give Exponential Parses. I saw the man on the hill with the telescope:
"I was on the hill that has a telescope when I saw a man."
"I saw a man who was on a hill and who had a telescope."
"Using a telescope, I saw a man who was on a hill."
"I saw a man who was on the hill that has a telescope on it."
"I was on the hill when I used the telescope to see a man."
...
(The slide's figure, not preserved here, labels the participants: me, seeing, a man, the telescope, the hill.)

Dynamic Programming Parsing: To avoid extensive repeated work, you must cache intermediate results, specifically found constituents. Caching (memoizing) is critical to obtaining a polynomial-time parsing algorithm for CFGs. Dynamic programming algorithms based on both top-down and bottom-up search can achieve O(n³) recognition time, where n is the length of the input string.

Dynamic Programming Parsing Methods:
CKY (Cocke-Kasami-Younger): based on bottom-up parsing; requires first normalizing the grammar.
Earley parser: based on top-down parsing; does not require normalizing the grammar, but is more complex.
Both fall under the general category of chart parsers, which retain completed constituents in a chart.

CKY parser: the chart. Sentence: "Film the man with trust". Cell[i,j] contains all constituents covering words i through j. (A cell of the chart is highlighted:) what does this cell represent?

CKY parser: the chart. The highlighted cell contains all constituents spanning words 1-3, i.e. "the man with".

CKY parser: the chart. How could we figure this out?

CKY parser: the chart. Key: rules are binary and only have two constituents on the right-hand side, e.g. VP → VB NP and NP → DT NN.

CKY parser: the chart. See if we can make a new constituent combining any constituent for "the" with any for "man with".

CKY parser: the chart. See if we can make a new constituent combining any constituent for "the man" with any for "with".

CKY parser: the chart. What combinations do we need to consider when trying to put constituents in this cell?

CKY parser: the chart. See if we can make a new constituent combining any constituent for "Film" with any for "the man with trust".

CKY parser: the chart. See if we can make a new constituent combining any constituent for "Film the" with any for "man with trust".

CKY parser: the chart. See if we can make a new constituent combining any constituent for "Film the man" with any for "with trust".

CKY parser: the chart. See if we can make a new constituent combining any constituent for "Film the man with" with any for "trust".

CKY parser: the chart. What if our rules weren't binary?

CKY parser: the chart. See if we can make a new constituent combining any constituent for "Film" with any for "the man" with any for "with trust".

CKY parser: the chart. What order should we fill the entries in the chart?

CKY parser: the chart. Our dependencies are left and down.

CKY parser: the chart. Fill from bottom to top, left to right.

CKY parser: the chart. Alternatively: top-left, along the diagonals, moving to the right.
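(Not in the original slides: a sketch of the CKY recognizer just described, assuming the grammar is already in CNF. The encodings `lexicon` (word -> set of preterminals) and `binary_rules` ((B, C) -> set of A, for rules A → B C) are my own.)

    from collections import defaultdict

    def cky_recognize(words, lexicon, binary_rules):
        n = len(words)
        chart = defaultdict(set)              # chart[(i, j)]: constituents over words[i:j]
        for i, w in enumerate(words):         # fill the diagonal from the lexicon
            chart[(i, i + 1)] = set(lexicon.get(w, ()))
        for span in range(2, n + 1):          # shorter spans first: bottom to top
            for i in range(n - span + 1):
                j = i + span
                for k in range(i + 1, j):     # try every binary split point
                    for B in chart[(i, k)]:
                        for C in chart[(k, j)]:
                            chart[(i, j)] |= binary_rules.get((B, C), set())
        return "S" in chart[(0, n)]           # a parse exists iff S covers words 0..n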

CKY parser: unary rules. Often, we will leave unary rules in rather than converting to CNF. Do these complicate the algorithm? We must check, whenever we add a constituent to a cell, whether any unary rules apply (see the sketch after the rule list below).
S -> VP
VP -> VB NP
VP -> VP2 PP
VP2 -> VB NP
NP -> DT NN
NP -> NN
NP -> NP PP
PP -> IN NP
DT -> the
IN -> with
VB -> film
VB -> trust
NN -> man
NN -> film
NN -> trust
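(Not in the original slides: a sketch of that extra unary check, run on a cell each time constituents are added to it; `unary_rules`, which maps B to the set of A with a rule A -> B, is an encoding of my own.)

    def unary_closure(cell, unary_rules):
        agenda = list(cell)
        while agenda:                        # keep applying unary rules until
            B = agenda.pop()                 # nothing new can be added
            for A in unary_rules.get(B, ()):
                if A not in cell:
                    cell.add(A)
                    agenda.append(A)
        return cell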

CKY parser: the chart. Sentence: "Film the man with trust". Grammar: S → VP; VP → VB NP; VP → VP2 PP; VP2 → VB NP; NP → DT NN; NP → NN; NP → NP PP; PP → IN NP; DT → the; IN → with; VB → film; VB → man; VB → trust; NN → man; NN → film; NN → trust. (The chart starts out empty.)

CKY parser: filling the chart (the original slides animate this; reconstructed trace for "Film the man with trust"):
Single words (the diagonal): Film → NN, VB, NP (via NP → NN); the → DT; man → NN, VB, NP; with → IN; trust → NN, VB, NP.
Length-2 spans: "the man" → NP (NP → DT NN); "with trust" → PP (PP → IN NP).
Length-3 spans: "Film the man" → VP2 and VP (VB + NP), plus S (S → VP); "man with trust" → NP (NP → NP PP).
Length-4 spans: "the man with trust" → NP (NP → NP PP, combining "the man" and "with trust").
The full sentence: VP2 and VP (VB "Film" + NP "the man with trust"), another VP (VP2 "Film the man" + PP "with trust"), and S (S → VP).

CKY: some things to talk about. After we fill in the chart, how do we know if there is a parse? If there is an S in the upper-right corner. What if we want an actual tree/parse?

CKY: retrieving the parse (the original slides animate following the references; reconstructed): Start from the S in the upper-right cell. The S references the VP it was built from (S → VP); the VP references the VB over "Film" and the NP over "the man with trust" (VP → VB NP); that NP references the NP over "the man" and the PP over "with trust" (NP → NP PP); these bottom out through NP → DT NN and PP → IN NP at the words.

CKY: retrieving the parse. Where do these arrows/references come from?

CKY: retrieving the parse. To add a constituent in a cell, we're applying a rule (here, S → VP). The references represent the smaller constituents we used to build this constituent.

CKY: retrieving the parse. The same applies for VP → VB NP: the references point to the VB and NP sub-constituents that the rule combined.
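(Not in the original slides: a sketch of how these references might be stored and followed. Each constituent records the rule application that built it: either the word it covers, or its two smaller (span, label) sub-constituents.)

    def cky_with_backpointers(words, lexicon, binary_rules):
        n = len(words)
        chart = [[{} for _ in range(n + 1)] for _ in range(n + 1)]
        for i, w in enumerate(words):
            for A in lexicon.get(w, ()):
                chart[i][i + 1][A] = w                   # leaf: remember the word
        for span in range(2, n + 1):
            for i in range(n - span + 1):
                j = i + span
                for k in range(i + 1, j):
                    for B in chart[i][k]:
                        for C in chart[k][j]:
                            for A in binary_rules.get((B, C), ()):
                                # reference the two smaller constituents used
                                chart[i][j].setdefault(A, ((i, k, B), (k, j, C)))
        return chart

    def build_tree(chart, i, j, A):
        # Follow the references from constituent (i, j, A) down to the words.
        back = chart[i][j][A]
        if isinstance(back, str):
            return (A, back)                             # a preterminal over a word
        (i1, k, B), (k2, j2, C) = back
        return (A, build_tree(chart, i1, k, B), build_tree(chart, k2, j2, C))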

CKY: retrieving the parse. What about ambiguous parses?

CKY: retrieving the parse. We can store multiple derivations of each constituent. This representation is called a "parse forest". It is often convenient to leave it in this form rather than enumerate all possible parses. Why?

CKY: some things to think about. We get a CNF parse tree, but we want one for the actual grammar. Ideas?
CNF grammar:
S → VP
VP → VB NP
VP → VP2 PP
VP2 → VB NP
NP → DT NN
NP → NN
…
Actual grammar:
S → VP
VP → VB NP
VP → VB NP PP
NP → DT NN
NP → NN
…
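(Not in the original slides: one common idea, sketched here, is to mark the nonterminals introduced during binarization (VP2, X1, ...) and splice them out of the CNF tree afterwards, promoting their children into the parent constituent.)

    def debinarize(tree, is_intermediate=lambda label: label in {"VP2", "X1"}):
        label, *children = tree
        if not children or isinstance(children[0], str):
            return tree                          # preterminal over a word: keep as is
        new_children = []
        for child in map(debinarize, children):
            if is_intermediate(child[0]):
                new_children.extend(child[1:])   # splice the intermediate's children in
            else:
                new_children.append(child)
        return (label, *new_children)

    # e.g. (VP (VP2 (VB film) (NP ...)) (PP ...)) becomes (VP (VB film) (NP ...) (PP ...)),
    # recovering the original rule VP → VB NP PP.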

Parsing ambiguity: I eat sushi with tuna. Grammar: S → NP VP; NP → PRP; NP → N PP; VP → V NP; VP → V NP PP; PP → IN N; PRP → I; V → eat; N → sushi; N → tuna; IN → with. The slide shows two parse trees over the leaves PRP V N IN N: one attaches the PP "with tuna" inside the object NP ("sushi with tuna"); the other attaches the PP to the VP (the eating is done with tuna). How can we decide between these?

A Simple PCFG: Probabilities!
S → NP VP 1.0
VP → V NP 0.7
VP → VP PP 0.3
PP → P NP 1.0
P → with 1.0
V → saw 1.0
NP → NP PP 0.4
NP → astronomers 0.1
NP → ears 0.18
NP → saw 0.04
NP → stars 0.18
NP → telescope 0.1

(The slide shows two parse trees; the images are not preserved.)
P(tree 1) = 1.0 × 0.1 × 0.7 × 1.0 × 0.4 × 0.18 × 1.0 × 1.0 × 0.18 = 0.0009072
P(tree 2) = 1.0 × 0.1 × 0.3 × 0.7 × 1.0 × 0.18 × 1.0 × 1.0 × 0.18 = 0.0006804
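(Not in the original slides: a quick check of the two products in Python. The factors are the rule probabilities read off the PCFG above; the two trees appear to be the two PP attachments of "astronomers saw stars with ears".)

    from math import prod

    tree1 = [1.0, 0.1, 0.7, 1.0, 0.4, 0.18, 1.0, 1.0, 0.18]  # PP inside the object NP
    tree2 = [1.0, 0.1, 0.3, 0.7, 1.0, 0.18, 1.0, 1.0, 0.18]  # PP attached to the VP
    print(prod(tree1))   # ≈ 0.0009072
    print(prod(tree2))   # ≈ 0.0006804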

Parsing with PCFGs. How does this change our CKY algorithm? We need to keep track of the probability of a constituent. How do we calculate the probability of a constituent? The product of the PCFG rule probability and the probabilities of the sub-constituents (the right-hand sides), building up the product bottom-up. What if there are multiple ways of deriving a particular constituent? Max: pick the most likely derivation of that constituent.

Probabilistic CKY: Include in each cell a probability for each non-terminal. Cell[i,j] must retain the most probable derivation of each constituent (non-terminal) covering words i through j. When transforming the grammar to CNF, we must set production probabilities so as to preserve the probability of derivations.
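(Not in the original slides: a sketch of CKY extended with probabilities, keeping for each nonterminal in each cell only its most probable (max) derivation. The encodings `lexicon` (word -> {preterminal: prob}) and `binary_rules` ((B, C) -> {A: rule prob}) are my own.)

    def pcky(words, lexicon, binary_rules):
        n = len(words)
        chart = [[{} for _ in range(n + 1)] for _ in range(n + 1)]
        for i, w in enumerate(words):
            chart[i][i + 1] = dict(lexicon.get(w, {}))
        for span in range(2, n + 1):
            for i in range(n - span + 1):
                j = i + span
                for k in range(i + 1, j):
                    for B, pb in chart[i][k].items():
                        for C, pc in chart[k][j].items():
                            for A, pr in binary_rules.get((B, C), {}).items():
                                p = pr * pb * pc            # rule prob times sub-constituents
                                if p > chart[i][j].get(A, 0.0):
                                    chart[i][j][A] = p      # max over derivations
        return chart[0][n].get("S", 0.0)    # probability of the best parse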

Probabilistic Grammar Conversion
Original Grammar:
S → NP VP 0.8
S → Aux NP VP 0.1
S → VP 0.1
NP → Pronoun 0.2
NP → Proper-Noun 0.2
NP → Det Nominal 0.6
Nominal → Noun 0.3
Nominal → Nominal Noun 0.2
Nominal → Nominal PP 0.5
VP → Verb 0.2
VP → Verb NP 0.5
VP → VP PP 0.3
PP → Prep NP 1.0
Chomsky Normal Form:
S → NP VP 0.8
S → X1 VP 0.1
X1 → Aux NP 1.0
S → book 0.01 | include 0.004 | prefer 0.006
S → Verb NP 0.05
S → VP PP 0.03
NP → I 0.1 | he 0.02 | she 0.02 | me 0.06
NP → Houston 0.16 | NWA 0.04
NP → Det Nominal 0.6
Nominal → book 0.03 | flight 0.15 | meal 0.06 | money 0.06
Nominal → Nominal Noun 0.2
Nominal → Nominal PP 0.5
VP → book 0.1 | include 0.04 | prefer 0.06
VP → Verb NP 0.5
VP → VP PP 0.3
PP → Prep NP 1.0
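(Not in the original slides: a worked check that the conversion preserves derivation probabilities. The rule S → Aux NP VP (0.1) is split into S → X1 VP (0.1) and X1 → Aux NP (1.0); any tree using the pair contributes 0.1 × 1.0 = 0.1, exactly the original rule's probability. Likewise, S → VP (0.1) composed with VP → Verb NP (0.5) is collapsed into S → Verb NP with probability 0.1 × 0.5 = 0.05.)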

Probabilistic CKY Parser. Sentence: "Book the flight through Houston". Lexical cells so far: Book → S: .01, VP: .1, Verb: .5, Nominal: .03, Noun: .1; the → Det: .6; flight → Nominal: .15, Noun: .5. The cell for "Book the" is empty (None). Rule: NP → Det Nominal 0.60. What is the probability of the NP over "the flight"?

Probabilistic CKY Parser. NP("the flight"): 0.6 × 0.6 × 0.15 = 0.054 (rule probability × P(Det) × P(Nominal)).

Probabilistic CKY Parser. Rule: VP → Verb NP 0.5. What is the probability of the VP?

Probabilistic CKY Parser. VP("Book the flight"): 0.5 × 0.5 × 0.054 = 0.0135.

Probabilistic CKY Parser. S("Book the flight"): 0.05 × 0.5 × 0.054 = 0.00135.

Probabilistic CKY Parser. No constituents (None) can be built over the spans ending at "through"; through → Prep: .2.

Probabilistic CKY Parser. Houston → NP: .16, PropNoun: .8; PP("through Houston"): 1.0 × 0.2 × 0.16 = 0.032.

Probabilistic CKY Parser. Nominal("flight through Houston"): 0.5 × 0.15 × 0.032 = 0.0024.

Probabilistic CKY Parser. NP("the flight through Houston"): 0.6 × 0.6 × 0.0024 = 0.000864.

Probabilistic CKY Parser. Two derivations of S over the whole sentence: S: 0.05 × 0.5 × 0.000864 = 0.0000216 (via S → Verb NP 0.05) and S: 0.03 × 0.0135 × 0.032 = 0.00001296 (via S → VP PP 0.03). Which parse do we pick?

Probabilistic CKY Parser. Pick the most probable parse, i.e. take the max to combine probabilities of multiple derivations of each constituent in each cell: S: 0.0000216.

Generic PCFG Limitations:
PCFGs do not rely on specific words or concepts, so only general structural disambiguation is possible (e.g. prefer to attach PPs to Nominals).
Generic PCFGs cannot resolve syntactic ambiguities that require semantics to resolve, e.g. "ate with": fork vs. meatballs.
Smoothing / dealing with out-of-vocabulary words: MLE estimates are not always the best.