Natural Language Processing Lecture 15—10/15/2015 Jim Martin.

Today
- Start on Parsing
- Parsing frameworks
- CKY

Treebanks
- Treebanks are corpora in which each sentence has been paired with a parse tree (presumably the right one).
- These are generally created:
  1. By first parsing the collection with an automatic parser
  2. And then having human annotators hand-correct each parse as necessary
- This generally requires detailed annotation guidelines that provide a POS tagset, a grammar, and instructions for how to deal with particular grammatical constructions.

Penn Treebank
- The Penn Treebank is a widely used treebank.
- Its most well-known part is the Wall Street Journal section: about 1 million words from the Wall Street Journal.

Treebank Grammars
- Treebanks implicitly define a grammar for the language covered in the treebank.
- Simply take the local rules that make up the sub-trees in all the trees in the collection and you have a grammar (a sketch of this extraction follows below).
- The WSJ section gives us about 12k rules if you do this.
- Not complete, but if you have a decent-sized corpus, you will have a grammar with decent coverage.
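The rule read-off just described is mechanical. Here is a minimal sketch of it, assuming trees are stored as nested (label, children...) tuples; the example tree, the labels, and the helper name `rules` are illustrative assumptions, not actual Penn Treebank content.

```python
# A minimal sketch of reading a grammar off treebank trees, assuming trees
# are nested (label, children...) tuples; the tree below is a hypothetical
# example, not an actual Penn Treebank entry.
from collections import Counter

def rules(tree, counts):
    """Recursively collect one rule per internal node: LHS -> child labels."""
    label, children = tree[0], tree[1:]
    if len(children) == 1 and isinstance(children[0], str):
        counts[(label, (children[0],))] += 1        # lexical rule, e.g. Det -> "the"
        return
    counts[(label, tuple(c[0] for c in children))] += 1
    for child in children:
        rules(child, counts)

tree = ("S",
        ("NP", ("Det", "the"), ("Noun", "flight")),
        ("VP", ("Verb", "left")))

grammar = Counter()
rules(tree, grammar)
for (lhs, rhs), n in grammar.items():
    print(f"{lhs} -> {' '.join(rhs)}   ({n})")
```

Keeping the counts as each rule is read off is also what later lets probabilities be attached to the grammar (Chapter 14).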

Treebank Grammars
- Such grammars tend to be very flat, because they tend to avoid recursion.
- To ease the annotators' burden, among other things.
- For example, the Penn Treebank has ~4500 different rules for VPs. Among them...

Head Finding
- Finding heads in treebank trees is a task that arises frequently in many applications.
- As we'll see, it is particularly important in statistical parsing.
- We can visualize this task by annotating each node of a parse tree with the head of the corresponding constituent.

Lexically Decorated Tree

Head Finding
- Given a tree, the standard way to do head finding is to use a simple set of tree traversal rules specific to each non-terminal in the grammar (a sketch follows below).
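One way to picture those traversal rules is as a small head table: for each non-terminal, a scan direction plus a priority list of child labels. The sketch below is a minimal illustration under that assumption; the head table is a tiny made-up fragment, not the full published head-rule set.

```python
# A minimal sketch of rule-based head finding over (label, children...) tuple
# trees. HEAD_RULES is a tiny illustrative fragment, not the real rule set.
HEAD_RULES = {
    # label: (scan direction, candidate child labels in priority order)
    "NP": ("right-to-left", ["NN", "NNS", "NNP", "NP"]),
    "VP": ("left-to-right", ["VBD", "VBZ", "VB", "VP"]),
    "S":  ("left-to-right", ["VP", "S"]),
}

def find_head(tree):
    """Return the head word of a constituent by recursive rule lookup."""
    label, children = tree[0], tree[1:]
    if isinstance(children[0], str):          # preterminal: the word is the head
        return children[0]
    direction, candidates = HEAD_RULES.get(label, ("left-to-right", []))
    kids = children if direction == "left-to-right" else tuple(reversed(children))
    for cand in candidates:                   # scan for the highest-priority label
        for child in kids:
            if child[0] == cand:
                return find_head(child)
    return find_head(kids[0])                 # fallback: first child in scan order

tree = ("S",
        ("NP", ("NNP", "United")),
        ("VP", ("VBD", "canceled"), ("NP", ("DT", "the"), ("NN", "flight"))))
print(find_head(tree))   # -> "canceled"
```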

Noun Phrases

Treebank Uses
- Treebanks (and head finding) are particularly critical to the development of statistical parsers (Chapter 14).
- Also valuable for corpus linguistics: investigating the empirical details of various constructions in a given language.

Parsing
- Parsing with CFGs refers to the task of assigning proper trees to input strings.
- Proper here means a tree that covers all and only the elements of the input and has an S at the top.
- It doesn't mean that the system can select the correct tree from among all the possible trees.

Automatic Syntactic Parse

For Now
- Let's assume...
  - You have all the words for a sentence already in some buffer
  - The input is not POS tagged prior to parsing
  - We won't worry about morphological analysis
  - All the words are known
- These are all problematic in various ways, and would have to be addressed in real applications.

Search Framework
- It's productive to think about parsing as a form of search...
- A search through the space of possible trees given an input sentence and grammar
- This framework suggests that heuristic search methods and/or dynamic programming methods might be applicable
- It also suggests that notions such as the direction of the search might be useful

Top-Down Search
- Since we're trying to find trees rooted with an S (sentences), why not start with the rules that give us an S?
- Then we can work our way down from there to the words (a minimal sketch follows below).
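As a concrete, if naive, picture of top-down search, here is a minimal recursive-descent sketch with backtracking over a toy grammar. The grammar, sentence, and function name are illustrative assumptions; note that a left-recursive rule would send this naive version into an infinite loop.

```python
# A minimal sketch of naive top-down (recursive-descent) search with
# backtracking over a toy grammar; everything here is illustrative.
GRAMMAR = {
    "S":    [["NP", "VP"]],
    "NP":   [["Det", "Noun"]],
    "VP":   [["Verb", "NP"], ["Verb"]],
    "Det":  [["the"], ["a"]],
    "Noun": [["plane"], ["flight"]],
    "Verb": [["left"], ["booked"]],
}

def parse(symbols, words):
    """Try to expand the list of symbols so it derives exactly `words`."""
    if not symbols:
        return not words                       # success iff all words consumed
    first, rest = symbols[0], symbols[1:]
    if first not in GRAMMAR:                   # terminal: must match the next word
        return bool(words) and words[0] == first and parse(rest, words[1:])
    # non-terminal: try each expansion in turn, backtracking on failure
    return any(parse(expansion + rest, words) for expansion in GRAMMAR[first])

print(parse(["S"], ["the", "plane", "left"]))   # -> True
```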

Top-Down Space

Bottom-Up Parsing
- Of course, we also want trees that cover the input words. So we might also start with trees that link up with the words in the right way.
- Then we work our way up from there to larger and larger trees (a shift-reduce sketch follows below).
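For contrast, here is an equally minimal bottom-up (shift-reduce) sketch over the same toy language as the top-down example above. It reduces greedily whenever a right-hand side matches the top of the stack, so unlike a real parser it has no backtracking or oracle for choosing between shifting and reducing; the rule list and lexicon are illustrative assumptions.

```python
# A minimal sketch of greedy bottom-up (shift-reduce) search; illustrative only.
RULES = [("NP", ("Det", "Noun")), ("VP", ("Verb", "NP")), ("VP", ("Verb",)),
         ("S", ("NP", "VP"))]
LEXICON = {"the": "Det", "a": "Det", "plane": "Noun",
           "flight": "Noun", "left": "Verb", "booked": "Verb"}

def shift_reduce(words):
    stack, buffer = [], list(words)
    while True:
        reduced = False
        for lhs, rhs in RULES:                 # greedily reduce when a RHS matches
            if tuple(stack[-len(rhs):]) == rhs:
                stack[-len(rhs):] = [lhs]
                reduced = True
                break
        if reduced:
            continue
        if buffer:
            stack.append(LEXICON[buffer.pop(0)])   # shift the next word's category
        else:
            return stack                       # nothing to reduce, nothing to shift

print(shift_reduce(["the", "plane", "left"]))   # -> ['S']
```

Because it reduces greedily, this sketch fails on some sentences the grammar does cover; that fragility is exactly the control problem discussed on the following slides.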

Bottom-Up Search (a sequence of figures stepping through the bottom-up search space)

Top-Down and Bottom-Up
- Top-down
  - Only searches for trees that can be answers (i.e., S's)
  - But also suggests trees that are not consistent with any of the words
- Bottom-up
  - Only forms trees consistent with the words
  - But suggests trees that make no sense globally

Control
- Of course, in both cases we left out how to keep track of the search space and how to make choices
  - Which node to try to expand next
  - Which grammar rule to use to expand a node
- One approach is called backtracking.
  - Make a choice; if it works out, then fine
  - If not, then back up and make a different choice
  - Same as with ND-Recognize

Problems
- Even with the best filtering, backtracking methods are doomed because of two inter-related problems:
  - Ambiguity and search control (choice)
  - Shared subproblems

Ambiguity

Shared Sub-Problems
- No matter what kind of search (top-down, bottom-up, or mixed) we choose...
- We can't afford to redo work we've already done.
- Without some help, naïve backtracking will lead to such duplicated work.

Shared Sub-Problems
- Consider: "A flight from Indianapolis to Houston on TWA"

Sample L1 Grammar

Shared Sub-Problems
- Assume a top-down parse that has already expanded the NP rule (dealing with the Det)
- Now it's making choices among the various Nominal rules
- In particular, between these two:
  - Nominal -> Noun
  - Nominal -> Nominal PP
- Statically choosing the rules in this order leads to the following bad behavior...

Shared Sub-Problems (a sequence of figures showing the duplicated work)

Dynamic Programming
- DP search methods fill tables with partial results and thereby
  - Avoid doing avoidable repeated work
  - Solve exponential problems in polynomial time (ok, not really)
  - Efficiently store ambiguous structures with shared sub-parts
- We'll cover one approach that corresponds to a bottom-up strategy: CKY

CKY Parsing
- First we'll limit our grammar to epsilon-free, binary rules (more on this later)
- Consider the rule A -> B C
  - If there is an A somewhere in the input generated by this rule, then there must be a B followed by a C in the input.
  - If the A spans from i to j in the input, then there must be some k s.t. i < k < j
  - In other words, the B splits from the C someplace after the i and before the j.
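The binary-rule restriction is usually obtained by converting the grammar to Chomsky Normal Form. Below is a minimal sketch of just the binarization step, one of several equivalent ways to do it; the intermediate label scheme and the helper name `binarize` are illustrative assumptions.

```python
# A minimal sketch of binarizing a long rule: rewrite any rule with more than
# two right-hand-side symbols by introducing intermediate non-terminals.
def binarize(lhs, rhs):
    rules = []
    while len(rhs) > 2:
        new_nt = f"{lhs}|{'_'.join(rhs[:2])}"      # hypothetical intermediate label
        rules.append((new_nt, list(rhs[:2])))
        rhs = [new_nt] + list(rhs[2:])
    rules.append((lhs, list(rhs)))
    return rules

for lhs, rhs in binarize("VP", ["Verb", "NP", "PP"]):
    print(lhs, "->", " ".join(rhs))
# VP|Verb_NP -> Verb NP
# VP -> VP|Verb_NP PP
```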

CKY
- Let's build a table so that an A spanning from i to j in the input is placed in cell [i,j] in the table.
- So a non-terminal spanning an entire string will sit in cell [0,n]
  - Hopefully it will be an S
- Now we know that the parts of the A must go from i to k and from k to j, for some k

CKY
- Meaning that for a rule like A -> B C we should look for a B in [i,k] and a C in [k,j].
- In other words, if we think there might be an A spanning i,j in the input...
  AND A -> B C is a rule in the grammar
  THEN there must be a B in [i,k] and a C in [k,j] for some k such that i < k < j
- What about the B and the C?

CKY
- So, to fill the table, loop over the cell [i,j] values in some systematic way
- Then for each cell, loop over the appropriate k values to search for things to add.
- Add all the derivations that are possible for each [i,j] for each k

CKY Table

CKY Algorithm
- What's the complexity of this?
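The CKY pseudocode figure for this slide is not reproduced in the transcript. As a point of reference, here is a minimal recognizer sketch that fills the table exactly as just described, assuming a grammar already in Chomsky Normal Form; the toy grammar and sentence are illustrative assumptions, not the book's L1 grammar.

```python
# A minimal sketch of a CKY recognizer over a tiny CNF grammar.
from collections import defaultdict

BINARY = [("S", "NP", "VP"), ("NP", "Det", "Noun"), ("VP", "Verb", "NP")]
LEXICAL = {"the": {"Det"}, "a": {"Det"}, "flight": {"Noun"},
           "plane": {"Noun"}, "booked": {"Verb"}, "left": {"Verb"}}

def cky_recognize(words):
    n = len(words)
    table = defaultdict(set)                   # table[(i, j)]: non-terminals spanning words[i:j]
    for j in range(1, n + 1):                  # fill column j after reading word j
        table[(j - 1, j)] |= LEXICAL.get(words[j - 1], set())
        for i in range(j - 2, -1, -1):         # rows from just below the diagonal up to 0
            for k in range(i + 1, j):          # every split point between i and j
                for lhs, b, c in BINARY:
                    if b in table[(i, k)] and c in table[(k, j)]:
                        table[(i, j)].add(lhs)
    return "S" in table[(0, n)], table

ok, _ = cky_recognize(["the", "plane", "booked", "a", "flight"])
print(ok)   # -> True
```

The three nested loops over j, i, and k, times the rule lookups, give the familiar O(n^3 · |G|) running time, which answers the complexity question on the slide.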

Example

Example
- Filling column 5

Example
- Filling column 5 corresponds to processing word 5, which is Houston.
- So j is 5.
- So i goes from 3 down to 0 (3, 2, 1, 0).

Example (a sequence of figures filling column 5 of the table)

- Since there's an S in [0,5], we have a valid parse.
- Are we done? Well, we sort of left something out of the algorithm.

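Presumably the piece left out is how to recover trees: the recognizer only records which non-terminals span which cells. A standard extension stores back-pointers in each cell; below is a minimal sketch of that idea, repeating the toy grammar from the recognizer above. It keeps just one back-pointer per label per cell, so it returns a single tree; a full parser would keep all of them to represent ambiguity.

```python
# A minimal sketch of CKY with back-pointers so a tree can be rebuilt
# from cell [0, n]; same toy CNF grammar as the recognizer sketch above.
BINARY = [("S", "NP", "VP"), ("NP", "Det", "Noun"), ("VP", "Verb", "NP")]
LEXICAL = {"the": {"Det"}, "a": {"Det"}, "flight": {"Noun"},
           "plane": {"Noun"}, "booked": {"Verb"}, "left": {"Verb"}}

def cky_parse(words):
    n = len(words)
    table = [[{} for _ in range(n + 1)] for _ in range(n + 1)]   # table[i][j]: label -> back-pointer
    for j in range(1, n + 1):
        for label in LEXICAL.get(words[j - 1], set()):
            table[j - 1][j][label] = words[j - 1]                # leaf: point at the word itself
        for i in range(j - 2, -1, -1):
            for k in range(i + 1, j):
                for lhs, b, c in BINARY:
                    if b in table[i][k] and c in table[k][j]:
                        table[i][j][lhs] = (k, b, c)             # remember the split and children
    def build(i, j, label):
        bp = table[i][j][label]
        if isinstance(bp, str):
            return (label, bp)
        k, b, c = bp
        return (label, build(i, k, b), build(k, j, c))
    return build(0, n, "S") if "S" in table[0][n] else None

print(cky_parse(["the", "plane", "booked", "a", "flight"]))
```
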
CKY Notes
- Since it's bottom-up, CKY hallucinates a lot of silly constituents:
  - Segments that by themselves are constituents but cannot really occur in the context in which they are being suggested.
- To avoid this we can switch to a top-down control strategy
- Or we can add some kind of filtering that blocks constituents where they cannot happen in a final analysis.

CKY Notes
- We arranged the loops to fill the table a column at a time, from left to right, bottom to top.
- This assures us that whenever we're filling a cell, the parts needed to fill it are already in the table (to the left and below).
- It's somewhat natural in that it processes the input left to right, a word at a time.
  - Known as online
- Can you think of an alternative strategy?