CS 4705 Parsing More Efficiently and Accurately

Review
Top-Down vs. Bottom-Up Parsers
Left-corner table provides more efficient look-ahead
Left recursion solutions
Structural ambiguity…solutions?

Issues for Better Parsing
Efficiency
Error handling
Control strategies
Agreement and subcategorization

Inefficient Re-Parsing of Subtrees

Dynamic Programming
Create table of solutions to sub-problems (e.g. subtrees) as parse proceeds
Look up subtrees for each constituent rather than re-parsing
Since all parses are implicitly stored, all are available for later disambiguation
Examples: Cocke-Younger-Kasami (CYK) (1960), Graham-Harrison-Ruzzo (GHR) (1980), and Earley (1970) algorithms
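
To make the table-of-sub-solutions idea concrete, here is a minimal CKY-style recognizer sketch (my own illustration, not from the lecture; the toy CNF grammar and sentence are invented): cell (i, j) of the table holds every non-terminal that can derive words i..j, so each sub-problem is solved once and then looked up.

```python
# Minimal CKY-style recognizer (illustrative; toy CNF grammar and sentence invented).
from collections import defaultdict

BINARY = {                       # binary rules A -> B C, in Chomsky Normal Form
    ("NP", "VP"): {"S"},
    ("Det", "N"): {"NP"},
    ("V", "NP"): {"VP"},
}
LEXICAL = {                      # lexical rules A -> word
    "the": {"Det"}, "a": {"Det"},
    "flight": {"N"}, "meal": {"N"},
    "includes": {"V"},
}

def cky_recognize(words):
    n = len(words)
    table = defaultdict(set)     # table[(i, j)] = non-terminals spanning words[i:j]
    for i, w in enumerate(words):
        table[(i, i + 1)] = set(LEXICAL.get(w, set()))
    for width in range(2, n + 1):            # span width
        for i in range(n - width + 1):
            j = i + width
            for k in range(i + 1, j):        # split point: reuse stored sub-solutions
                for b in table[(i, k)]:
                    for c in table[(k, j)]:
                        table[(i, j)] |= BINARY.get((b, c), set())
    return "S" in table[(0, n)]

print(cky_recognize("the flight includes a meal".split()))   # True
```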

Earley’s Algorithm
Uses dynamic programming to do parallel top-down search in (worst case) O(N³) time
First, a left-to-right pass fills out a chart with N+1 states (N: the number of words in the input)
–Think of chart entries as sitting between words in the input string, keeping track of states of the parse at these positions
–For each word position, the chart contains the set of states representing all partial parse trees generated to date; e.g. chart[0] contains all partial parse trees generated at the beginning of the sentence
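
One possible encoding of these chart entries, assuming a small State record and a chart of N+1 state sets (the names below are my own, not from the lecture):

```python
# Sketch of a chart-entry (state) representation; names are my own choices.
from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    lhs: str        # left-hand side of the rule, e.g. "VP"
    rhs: tuple      # right-hand side, e.g. ("V", "NP")
    dot: int        # position of the dot within rhs
    start: int      # x: where the constituent begins in the input
    end: int        # y: where the dot lies with respect to the input

    def next_category(self):
        """Category just right of the dot, or None if the state is complete."""
        return self.rhs[self.dot] if self.dot < len(self.rhs) else None

def new_chart(n_words):
    """chart[i] holds the states whose dot sits at input position i (N+1 positions)."""
    return [[] for _ in range(n_words + 1)]

# The three kinds of entries for "0 Book 1 that 2 flight 3":
predicted   = State("S",  ("VP",),        0, 0, 0)   # S --> . VP, [0,0]
in_progress = State("NP", ("Det", "Nom"), 1, 1, 2)   # NP --> Det . Nom, [1,2]
completed   = State("VP", ("V", "NP"),    2, 0, 3)   # VP --> V NP ., [0,3]
```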

Chart entries represent three types of constituents:
–predicted constituents (top-down predictions)
–in-progress constituents (we’re in the midst of …)
–completed constituents (we’ve found …)
Progress in the parse is represented by Dotted Rules
–Position of the • indicates the type of constituent
–0 Book 1 that 2 flight 3
S --> • VP, [0,0] (predicting VP)
NP --> Det • Nom, [1,2] (finding NP)
VP --> V NP •, [0,3] (found VP)
–[x,y] tells us where the state begins (x) and where the dot lies (y) wrt the input

S --> • VP, [0,0]
–First 0 means the S constituent begins at the start of the input
–Second 0 means the dot is here too
–So, this is a top-down prediction
NP --> Det • Nom, [1,2]
–the NP begins at position 1
–the dot is at position 2
–so, Det has been successfully parsed
–Nom is predicted next
0 Book 1 that 2 flight 3

VP --> V NP •, [0,3]
–Successful VP parse of entire input
–Graphical representation

Successful Parse
Final answer is found by looking at the last entry in the chart
If an entry resembles S --> α •, [0,N] then the input was parsed successfully
But … note that the chart will also contain a record of all possible parses of the input string, given the grammar -- not just the successful one(s)
–Why is this useful?

Parsing Procedure for the Earley Algorithm
Move through each set of states in order, applying one of three operators to each state:
–predictor: add top-down predictions to the chart
–scanner: read input and add corresponding state to chart
–completer: move dot to right when new constituent found
Results (new states) added to current or next set of states in chart
No backtracking and no states removed: keep complete history of parse
–Why is this useful?

Predictor
Intuition: new states represent top-down expectations
Applied when non-part-of-speech non-terminals are to the right of a dot
S --> • VP, [0,0]
Adds new states to the end of the current chart
–One new state for each expansion of the non-terminal in the grammar
VP --> • V, [0,0]
VP --> • V NP, [0,0]

Scanner
New states for predicted parts of speech
Applicable when a part of speech is to the right of a dot
VP --> • V NP, [0,0]  ‘Book…’
Looks at the current word in the input
If it matches, adds state(s) to the next chart
VP --> V • NP, [0,1]
I.e., we’ve found a piece of this constituent!

Completer
Intuition: we’ve found a constituent, so tell everyone waiting for this
Applied when the dot has reached the right end of a rule
NP --> Det Nom •, [1,3]
Find all states with the dot at 1 and expecting an NP
VP --> V • NP, [0,1]
Adds new (completed) state(s) to the current chart
VP --> V NP •, [0,3]

The Earley Algorithm
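
The figure for this slide (the textbook’s pseudocode) does not survive in the transcript. As a stand-in, here is a compact Python sketch of the same procedure, using a small invented subset of the grammar fragment on the next slide; it is an illustrative recognizer, not the lecture’s code:

```python
# Compact Earley recognizer sketch (my own rendering; not the lecture's code).
GRAMMAR = {                         # phrase-structure rules (a small invented subset)
    "S":   [["NP", "VP"], ["VP"]],
    "NP":  [["Det", "Nom"]],
    "Nom": [["N"]],
    "VP":  [["V"], ["V", "NP"]],
}
LEXICON = {                         # part of speech -> words it covers
    "Det": {"that", "this", "a"},
    "N":   {"book", "flight", "meal"},
    "V":   {"book", "include", "prefer"},
}

def earley(words):
    n = len(words)
    # A state is (lhs, rhs, dot, start); chart[i] is the state set ending at position i.
    chart = [[] for _ in range(n + 1)]

    def add(state, i):              # enqueue without duplicates, never remove
        if state not in chart[i]:
            chart[i].append(state)

    add(("GAMMA", ("S",), 0, 0), 0)                    # dummy start state
    for i in range(n + 1):
        j = 0
        while j < len(chart[i]):                       # chart[i] may grow while we scan it
            lhs, rhs, dot, start = chart[i][j]
            nxt = rhs[dot] if dot < len(rhs) else None
            if nxt in GRAMMAR:                         # PREDICTOR: top-down expectations
                for expansion in GRAMMAR[nxt]:
                    add((nxt, tuple(expansion), 0, i), i)
            elif nxt in LEXICON:                       # SCANNER: consult the input word
                if i < n and words[i] in LEXICON[nxt]:
                    add((nxt, (words[i],), 1, i), i + 1)
            else:                                      # COMPLETER: dot at right end
                for (l2, r2, d2, s2) in chart[start]:
                    if d2 < len(r2) and r2[d2] == lhs:
                        add((l2, r2, d2 + 1, s2), i)
            j += 1
    # Success iff some completed S rule starts at 0 and its dot ends at n.
    return any(l == "S" and d == len(r) and s == 0 for (l, r, d, s) in chart[n])

print(earley("book that flight".split()))   # True
```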

Book that flight (Chart[0])
Seed the chart with top-down predictions for S from the grammar

CFG for Fragment of English
S → NP VP
S → Aux NP VP
S → VP
NP → Det Nom
NP → PropN
Nom → N
Nom → N Nom
Nom → Nom PP
VP → V
VP → V NP
PP → Prep NP
Det → that | this | a
N → book | flight | meal | money
V → book | include | prefer
Aux → does
Prep → from | to | on
PropN → Houston | TWA
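
For illustration, this fragment could be encoded as plain data, keeping the phrase-structure rules separate from the lexicon (the encoding is my own; the rules themselves are transcribed from the slide):

```python
# The slide's grammar fragment as plain data (an illustrative encoding).
RULES = {
    "S":   [["NP", "VP"], ["Aux", "NP", "VP"], ["VP"]],
    "NP":  [["Det", "Nom"], ["PropN"]],
    "Nom": [["N"], ["N", "Nom"], ["Nom", "PP"]],
    "VP":  [["V"], ["V", "NP"]],
    "PP":  [["Prep", "NP"]],
}
LEXICON = {
    "Det":   ["that", "this", "a"],
    "N":     ["book", "flight", "meal", "money"],
    "V":     ["book", "include", "prefer"],
    "Aux":   ["does"],
    "Prep":  ["from", "to", "on"],
    "PropN": ["Houston", "TWA"],
}

def parts_of_speech(word):
    """All lexical categories a word belongs to; 'book' is ambiguous between N and V."""
    return [pos for pos, words in LEXICON.items() if word in words]

print(parts_of_speech("book"))   # ['N', 'V']
```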

When the dummy start state is processed, it’s passed to the Predictor, which produces states representing every possible expansion of S, and adds these and every expansion of the left corners of these trees to the bottom of Chart[0]
When VP --> • V, [0,0] is reached, the Scanner is called, which consults the first word of the input, Book, and adds the first state to Chart[1], VP --> Book, [0,0]
Note: When VP --> • V NP, [0,0] is reached in Chart[0], the Scanner does not need to add VP --> Book, [0,0] again to Chart[1]

V --> book • is passed to the Completer, which finds 2 states in Chart[0] whose left corner is V and adds them to Chart[1], moving their dots to the right

When VP --> V • is itself processed by the Completer, S --> VP • is added to Chart[1], since VP is a left corner of S
The last 2 rules in Chart[1] are added by the Predictor when VP --> V • NP is processed
And so on….

How do we retrieve the parses at the end?
Augment the Completer to add a pointer to the prior states it advances, as a field in the current state
–I.e., what state did we advance here?
–Read the pointers back from the final state
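
A sketch of the backpointer idea (the representation and names are invented for illustration): each completed state carries the completed child states that advanced it, and a bracketed tree can be read off recursively from the final S state:

```python
# Backpointer sketch for parse retrieval (representation invented for illustration).
from dataclasses import dataclass, field

@dataclass
class State:
    lhs: str
    rhs: tuple
    dot: int
    start: int
    end: int
    pointers: list = field(default_factory=list)   # completed child states, recorded by the Completer

def build_tree(state):
    """Read the pointers back recursively, producing a bracketed tree."""
    if not state.pointers:                          # lexical state: rhs holds the word itself
        return f"({state.lhs} {' '.join(state.rhs)})"
    return f"({state.lhs} " + " ".join(build_tree(c) for c in state.pointers) + ")"

# Hand-built states mimicking what the Completer would record for "book that flight":
det = State("Det", ("that",), 1, 1, 2)
n   = State("N", ("flight",), 1, 2, 3)
nom = State("Nom", ("N",), 1, 2, 3, [n])
np  = State("NP", ("Det", "Nom"), 2, 1, 3, [det, nom])
v   = State("V", ("book",), 1, 0, 1)
vp  = State("VP", ("V", "NP"), 2, 0, 3, [v, np])
s   = State("S", ("VP",), 1, 0, 3, [vp])
print(build_tree(s))   # (S (VP (V book) (NP (Det that) (Nom (N flight)))))
```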

Error Handling
What happens when we look at the contents of the last table column and don’t find an S --> α • rule?
–Is it a total loss? No...
–The chart contains every constituent and combination of constituents possible for the input, given the grammar
Also useful for partial parsing or shallow parsing, as used in information extraction

Alternative Control Strategies
Change Earley top-down strategy to bottom-up or...
Change to best-first strategy based on the probabilities of constituents
–Compute and store probabilities of constituents in the chart as you parse
–Then instead of expanding states in fixed order, allow probabilities to control order of expansion
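
A minimal sketch of the best-first idea (the probabilities below are made up): keep not-yet-expanded constituents in a priority queue keyed by probability, so the most probable one is expanded first rather than following a fixed order:

```python
# Best-first expansion order via a priority queue (probabilities made up for illustration).
import heapq

agenda = []
for prob, constituent in [(0.35, "NP[1,3]"), (0.10, "PP[3,5]"), (0.55, "VP[0,3]")]:
    heapq.heappush(agenda, (-prob, constituent))    # negate: Python's heap is a min-heap

while agenda:
    neg_prob, constituent = heapq.heappop(agenda)   # most probable constituent comes out first
    print(f"expand {constituent} (p = {-neg_prob:.2f})")
# expand VP[0,3] (p = 0.55)
# expand NP[1,3] (p = 0.35)
# expand PP[3,5] (p = 0.10)
```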

But there are still problems…
Several things CFGs don’t handle elegantly:
–Agreement (A cat sleeps. Cats sleep.)
S → NP VP
NP → Det Nom
But these rules overgenerate, allowing, e.g., *A cat sleep…
–Subcategorization (Cats dream. Cats eat cantaloupe.)

VP → V
VP → V NP
But these also allow *Cats dream cantaloupe.
We need to constrain the grammar rules to enforce e.g. number agreement and subcategorization differences

CFG Solution
Encode constraints into the non-terminals
–Noun/verb agreement:
S → SgS
S → PlS
SgS → SgNP SgVP
SgNP → SgDet SgNom
–Verb subcat:
IntransVP → IntransV
TransVP → TransV NP

But this means a huge proliferation of rules…
An alternative:
–View terminals and non-terminals as complex objects with associated features, which take on different values
–Write grammar rules whose application is constrained by tests on these features, e.g. S → NP VP (only if the NP and VP agree in number)

Feature Structures
Sets of feature-value pairs where:
–Features are atomic symbols
–Values are atomic symbols or feature structures
–Illustrated by attribute-value matrix

Number feature
Number-person features
Number-person-category features (3sgNP)
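
The attribute-value matrices themselves do not survive in the transcript; as one possible rendering (my own), the three examples above can be written as nested mappings in which features are keys and values are atoms or further feature structures:

```python
# Feature structures as nested mappings (one possible rendering of the AVMs named above).
number = {"NUMBER": "sg"}                            # Number feature
number_person = {"NUMBER": "sg", "PERSON": 3}        # Number-person features
np_3sg = {                                           # Number-person-category features (3sgNP)
    "CAT": "NP",
    "AGREEMENT": {"NUMBER": "sg", "PERSON": 3},
}
print(np_3sg["AGREEMENT"]["NUMBER"])                 # sg
```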

Features, Unification and Grammars
How do we incorporate feature structures into our grammars?
–Assume that constituents are objects which have feature structures associated with them
–Associate sets of unification constraints with grammar rules
–Constraints must be satisfied for a rule to be satisfied
To enforce subject/verb number agreement:
S → NP VP
<NP NUMBER> = <VP NUMBER>
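
A minimal unification sketch over such structures (my own simplification: plain nested dicts, no reentrancy or variables). Checking the <NP NUMBER> = <VP NUMBER> constraint then amounts to unifying the two NUMBER values and rejecting the rule application if unification fails:

```python
# Minimal unification over nested dicts (illustrative: no reentrancy, no variables).
FAIL = None

def unify(f1, f2):
    """Return the most general structure compatible with both inputs, or FAIL."""
    if f1 == f2:
        return f1
    if isinstance(f1, dict) and isinstance(f2, dict):
        result = dict(f1)
        for feature, value in f2.items():
            if feature in result:
                merged = unify(result[feature], value)
                if merged is FAIL:
                    return FAIL
                result[feature] = merged
            else:
                result[feature] = value
        return result
    return FAIL                                      # conflicting atomic values

# Enforcing <NP NUMBER> = <VP NUMBER> for S -> NP VP:
np_sg = {"CAT": "NP", "NUMBER": "sg"}
vp_pl = {"CAT": "VP", "NUMBER": "pl"}
vp_sg = {"CAT": "VP", "NUMBER": "sg"}
print(unify(np_sg["NUMBER"], vp_pl["NUMBER"]))       # None: *A cat sleep is blocked
print(unify(np_sg["NUMBER"], vp_sg["NUMBER"]))       # 'sg': the rule can apply
```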

Agreement in English
We need to add PERS to our subj/verb agreement constraint
This cat likes kibble.
S → NP VP
<NP AGREEMENT> = <VP AGREEMENT>
Do these cats like kibble?
S → Aux NP VP
<Aux AGREEMENT> = <NP AGREEMENT>

Det/Nom agreement can be handled similarly:
These cats / This cat
NP → Det Nom
<Det AGREEMENT> = <Nom AGREEMENT>
And so on …

Verb Subcategorization
Recall: different verbs take different types of arguments
–Solution: SUBCAT feature, or subcategorization frames
e.g. Bill wants George to eat.

But there are many phrasal types and so many types of subcategorization frames, e.g.:
–believe
–believe [VPrep in] [NP ghosts]
–believe [NP my mother]
–believe [Sfin that I will pass this test]
–believe [Swh what I see]...
Verbs also subcategorize for subject as well as object types ([Swh What she wanted] seemed clear.)
And other parts of speech can be seen as subcategorizing for various arguments, such as prepositions, nouns and adjectives (It was clear [Sfin that she was exhausted])
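
As an illustration (the encoding and the check are my own sketch; the frame labels follow the believe list above), subcategorization frames can be stored per verb and compared against the complements actually found in a parse:

```python
# Subcategorization frames as lexical data (encoding is my own; frames follow the slide).
SUBCAT = {
    "believe": [
        [],                        # believe
        ["VPrep[in]", "NP"],       # believe [VPrep in] [NP ghosts]
        ["NP"],                    # believe [NP my mother]
        ["Sfin"],                  # believe [Sfin that I will pass this test]
        ["Swh"],                   # believe [Swh what I see]
    ],
    "dream": [[]],                 # Cats dream. / *Cats dream cantaloupe.
    "eat":   [[], ["NP"]],         # Cats eat. / Cats eat cantaloupe.
}

def licensed(verb, complements):
    """True if the verb subcategorizes for exactly this sequence of complements."""
    return complements in SUBCAT.get(verb, [])

print(licensed("dream", ["NP"]))   # False: *Cats dream cantaloupe
print(licensed("eat", ["NP"]))     # True: Cats eat cantaloupe
```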

Summing Up
Ambiguity, left-recursion, and repeated re-parsing of subtrees present major problems for parsers
Solutions:
–Combine top-down predictions with bottom-up look-ahead and use dynamic programming, e.g. the Earley algorithm
–Feature structures and subcategorization frames help constrain parses but increase parsing complexity
Next time: Read Ch 12