Introduction to Natural Language Processing (600.465)
Statistical Parsing
Dr. Jan Hajič, CS Dept., Johns Hopkins Univ.
12/06/1999

Language Model vs. Parsing Model
Language model:
–interested in string probability: P(W)
–defined by a formula such as P(W) = Π_{i=1..n} p(w_i | w_{i-2}, w_{i-1}) (trigram language model)
–or P(W) = Σ_{s∈S} P(W,s) = Σ_{s∈S} Π_{r∈s} p(r) (PCFG; r ~ rule used in the parse tree s)
Parsing model:
–conditional probability of a tree given the string: P(s|W) = P(W,s) / P(W) = P(s) / P(W) (!! P(W,s) = P(s) !!)
–for the argmax, just use P(s) (P(W) is constant)
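A minimal sketch (toy rule probabilities, not from the slides) of the two quantities above: P(W) as the sum over parses of the product of rule probabilities, and P(s|W) = P(s)/P(W); for the argmax only P(s) is needed.

```python
# Minimal sketch (toy values): P(W) and P(s|W) for a handful of hypothetical parses.
from math import prod

# hypothetical parses of one sentence W; each parse is a list of (rule, probability) pairs
parses = {
    "s1": [("S->NP VP", 1.0), ("VP->V NP PP", 0.3), ("PP->PREP N", 1.0)],
    "s2": [("S->NP VP", 1.0), ("VP->V NP", 0.7), ("NP->N PP", 0.2)],
}

def tree_prob(rules):
    """P(W,s) = P(s): product of the probabilities of the rules used in the tree."""
    return prod(p for _, p in rules)

p_W = sum(tree_prob(r) for r in parses.values())                 # language model: P(W)
posterior = {s: tree_prob(r) / p_W for s, r in parses.items()}   # parsing model: P(s|W)
best = max(parses, key=lambda s: tree_prob(parses[s]))           # argmax needs only P(s)
print(p_W, posterior, best)
```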

Once again, Lexicalization
Lexicalized parse tree (~ dependency tree + phrase labels)
Pre-terminals (above the leaves): assign the word below.
Recursive step (step up one level): (a) select the head daughter node, (b) copy its word up.
Ex. subtree:
  PP(with)
    PREP(with) → with
    N(a_telescope) → a_telescope
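A minimal sketch of this head propagation, assuming a hand-written head-daughter table; the table and the tree encoding are illustrative, not the parser's actual data structures.

```python
# Minimal sketch (assumed head-daughter table): propagate lexical heads bottom-up
# so every node X becomes X(headword), as in the lexicalized trees on the next slides.

HEAD_CHILD = {            # which daughter carries the head, per rule (assumption)
    ("S", ("NP", "VP")): 1,
    ("VP", ("V", "NP", "PP")): 0,
    ("NP", ("N",)): 0,
    ("PP", ("PREP", "N")): 0,
}

def lexicalize(node):
    """node = (label, word) for a pre-terminal, or (label, [children]) otherwise.
    Returns (label, head_word, lexicalized_children)."""
    label, rest = node
    if isinstance(rest, str):                      # pre-terminal: head = the word below
        return (label, rest, [])
    kids = [lexicalize(c) for c in rest]
    key = (label, tuple(k[0] for k in kids))
    head = kids[HEAD_CHILD[key]][1]                # copy the head daughter's word up
    return (label, head, kids)

tree = ("PP", [("PREP", "with"), ("N", "a_telescope")])
print(lexicalize(tree))   # ('PP', 'with', [('PREP', 'with', []), ('N', 'a_telescope', [])])
```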

Lexicalized Tree Example
Grammar:
#1 S → NP VP
#2 VP → V NP PP
#3 VP → V NP
#4 NP → N
#5 NP → N PP
#6 PP → PREP N
#7 N → a_dog
#8 N → a_cat
#9 N → a_telescope
#10 V → saw
#11 PREP → with
Sentence: a_dog saw a_cat with a_telescope (pre-terminals: N V N PREP N)
Lexicalized tree (PP attached to the VP, rule #2):
  S(saw) → NP(a_dog) VP(saw); VP(saw) → V(saw) NP(a_cat) PP(with); PP(with) → PREP(with) N(a_telescope)
[The slide also sketches the alternative attachment VP(saw) → V(saw) NP(a_cat) with NP(a_cat) → N(a_cat) PP(with), rules #3 and #5.]

Using POS Tags
Head ~ (word, tag) pair.
Sentence: a_dog saw a_cat with a_telescope (tags: N V N PREP N)
Lexicalized nodes now carry both word and tag: NP(a_dog,N), NP(a_cat,N), PP(with,PREP), VP(saw,V), S(saw,V).

Conditioning
Original PCFG: P(α B β D γ ... | A)
–no "lexical" units (words)
Introducing words: P(α B(head_B) β D(head_D) γ ... | A(head_A))
–where head_A is one of the heads appearing on the left of the conditioning bar (the head daughter's word, copied up)
E.g. rule VP(saw) → V(saw) NP(a_cat): P(V(saw) NP(a_cat) | VP(saw))
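As an illustration of why such fully lexicalized rules are a problem, a minimal sketch (hypothetical counts) of estimating P(RHS | LHS(head)) by relative frequency; every distinct lexicalized right-hand side needs its own parameter.

```python
# Minimal sketch (toy counts): relative-frequency estimate of a lexicalized rule
# probability P(RHS | LHS(head)) from hypothetical treebank counts.
from collections import Counter

rule_counts = Counter({
    ("VP(saw)", "V(saw) NP(a_cat)"): 3,
    ("VP(saw)", "V(saw) NP(a_cat) PP(with)"): 1,
})
lhs_counts = Counter()
for (lhs, _), c in rule_counts.items():
    lhs_counts[lhs] += c

def p_rule(lhs, rhs):
    return rule_counts[(lhs, rhs)] / lhs_counts[lhs]

print(p_rule("VP(saw)", "V(saw) NP(a_cat)"))   # 0.75
```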

Independence Assumptions
Too many rules; decompose:
  P(α B(head_B) β D(head_D) γ ... | A(head_A)) = ?
In general (total independence):
  P(α | A(head_A)) × P(B(head_B) | A(head_A)) × P(β | A(head_A)) × ...
Too much independence: need a compromise.

The Decomposition
Order does not matter; let's use intuition ("linguistics"):
–select the head daughter category: P_H(H(head_A) | A(head_A))
–select everything to the right: P_R(R_i(r_i) | A(head_A), H)
–also, choose when to finish: R_{m+1}(r_{m+1}) = STOP
–similarly, for the left direction: P_L(L_i(l_i) | A(head_A), H)
[Figure: A(head) rewrites as H(head), which then generates R_1(head_1), R_2(head_2), ..., STOP to its right.]
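A minimal sketch of this head-outward decomposition with made-up probability tables; for brevity the daughters' own headwords are dropped, so only P_H, P_R, P_L over categories are modeled.

```python
# Minimal sketch (toy tables): P(rule | A(head)) as
#   P_H(H | A,head) * prod_i P_L(L_i | A,head,H) * prod_j P_R(R_j | A,head,H),
# with STOP generated on both sides; daughter headwords are omitted here.

P_H = {("VP", "saw"): {"V": 1.0}}
P_L = {("VP", "saw", "V"): {"STOP": 1.0}}                        # nothing to the left
P_R = {("VP", "saw", "V"): {"NP": 0.5, "PP": 0.2, "STOP": 0.3}}  # right daughters

def rule_prob(parent, head, head_cat, left, right):
    p = P_H[(parent, head)][head_cat]
    for cat in left + ["STOP"]:
        p *= P_L[(parent, head, head_cat)][cat]
    for cat in right + ["STOP"]:
        p *= P_R[(parent, head, head_cat)][cat]
    return p

# VP(saw) -> V(saw) NP PP, generated as: head V; right NP, PP, STOP; left STOP
print(rule_prob("VP", "saw", "V", left=[], right=["NP", "PP"]))  # 1.0*1.0*0.5*0.2*0.3
```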

Example Decomposition
Order: (1) the head daughter, (2) daughters to the right until STOP, (3) daughters to the left until STOP.
Example for VP(saw): generate the head V(saw), then to the right NP(a_cat), PP(with), STOP; to the left, STOP.

More Conditioning: Distance
Motivation:
–close words (or phrases) are more likely to be dependents
–ex.: walking on a sidewalk on a sunny day without looking on...
Counting the number of words is too detailed, though; use a more sophisticated (yet more robust) distance measure d_r/l:
–distinguish zero vs. non-zero distance (2)
–distinguish whether a verb is in between the head and the constituent in question (2)
–distinguish how many commas are in between: 0, 1, 2, >2 (4)
–total: 16 possibilities
Added to the condition: P_R(R_i(r_i) | A(head_A), H, d_r); same to the left: P_L(L_i(l_i) | A(head_A), H, d_l)
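A minimal sketch of the 2 × 2 × 4 = 16-way distance feature; the exact encoding below is an assumption, only the three distinctions come from the slide.

```python
# Minimal sketch (assumed encoding): the coarse distance measure d, computed from the
# surface material between the head and the constituent being attached.

def distance_feature(words_between, verbs_between, commas_between):
    adjacent = 0 if words_between == 0 else 1          # zero vs. non-zero distance
    verb = 1 if verbs_between > 0 else 0               # is a verb in between?
    commas = min(commas_between, 3)                    # 0, 1, 2, >2 commas
    return (adjacent, verb, commas)                    # one of 16 buckets

print(distance_feature(0, 0, 0))   # (0, 0, 0) - adjacent, no verb, no comma
print(distance_feature(5, 1, 3))   # (1, 1, 3) - far, verb in between, >2 commas
```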

More Conditioning: Complement/Adjunct
So far, no distinction... but a time NP is not the same as a subject NP.
Also, a subject NP cannot repeat... useful during parsing.
[The complement marking must be added to the training data.]
Ex.: VP(saw) with daughters NP(a_dog) NP(yesterday) becomes VP(saw) with daughters NP-C(a_dog) NP(yesterday).

More Conditioning: Subcategorization
The problem is still not solved:
–two subjects: wrong! (ex.: S(was) with daughters NP-C(Johns Hopkins), NP-C(the 7th-best), VP(was))
–need: a relation among the complements
–[linguistic observation: adjuncts can repeat freely]
Introduce:
–left & right subcategorization frames (multisets)

Inserting Subcategorization
Use the head probability as before: P_H(H(head_A) | A(head_A))
Then, add the left & right subcat frames: P_lc(LC | A(head_A), H), P_rc(RC | A(head_A), H)
–LC, RC: multisets of phrase labels (not words)
Add them to the context condition: (left) P_L(L_i(l_i) | A(head_A), H, d_l, LC) [right: similar]
LC/RC are "dynamic": remove a label from the frame when the corresponding complement is generated
–P(STOP | ..., LC) = 0 if LC is non-empty
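A minimal sketch (toy probability tables, right direction only, distance omitted) of generating daughters with a dynamic subcat frame, where STOP is impossible while required complements remain.

```python
# Minimal sketch (toy tables): right daughters with a "dynamic" subcategorization
# frame RC; a required complement is removed from RC when generated, and
# P(STOP | ..., RC) is forced to 0 while RC is still non-empty.
from collections import Counter

P_R = {  # hypothetical P_R(label | parent, head, H, RC empty?); distance omitted
    ("VP", "saw", "V", False): {"NP-C": 0.8, "PP": 0.1, "NP": 0.1},
    ("VP", "saw", "V", True):  {"PP": 0.3, "NP": 0.2, "STOP": 0.5},
}

def right_sequence_prob(parent, head, head_cat, rc, labels):
    rc = Counter(rc)                                  # dynamic copy of the subcat frame
    p = 1.0
    for lab in labels + ["STOP"]:
        if lab == "STOP" and rc:                      # cannot stop with unfilled complements
            return 0.0
        p *= P_R[(parent, head, head_cat, not rc)].get(lab, 0.0)
        if lab in rc:
            rc[lab] -= 1                              # complement generated: remove it
            rc += Counter()                           # drop zero counts
    return p

print(right_sequence_prob("VP", "saw", "V", ["NP-C"], ["NP-C", "PP"]))  # 0.8 * 0.3 * 0.5
print(right_sequence_prob("VP", "saw", "V", ["NP-C"], []))              # 0.0 (can't STOP)
```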

Smoothing
Adding conditions ~ adding parameters; the usual sparse data problem (heads are words!)
Smooth (step-wise):
–P_smooth-H(H(head_A) | A(head_A)) = λ_1 P_H(H(head_A) | A(head_A)) + (1 − λ_1) P_smooth-H(H(head_A) | A(tag_A))
–P_smooth-H(H(head_A) | A(tag_A)) = λ_2 P_H(H(head_A) | A(tag_A)) + (1 − λ_2) P_H(H(head_A) | A)
Similarly for P_R and P_L.
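A minimal sketch of the step-wise interpolation with assumed λ values and toy counts; in practice the λs would be tuned, e.g. on held-out data.

```python
# Minimal sketch (toy counts, assumed lambdas): step-wise linear interpolation of the
# head probability, backing off word -> tag -> bare nonterminal.

LAMBDA1, LAMBDA2 = 0.7, 0.6   # interpolation weights (assumed)

def p_h(h, condition, counts):
    """Relative-frequency estimate P_H(h | condition) from a (condition -> counts) table."""
    c = counts.get(condition)
    return 0.0 if not c else c.get(h, 0) / sum(c.values())

def p_smooth_h(h, head, tag, nt, counts_word, counts_tag, counts_nt):
    backoff = (LAMBDA2 * p_h(h, (nt, tag), counts_tag)
               + (1 - LAMBDA2) * p_h(h, nt, counts_nt))
    return LAMBDA1 * p_h(h, (nt, head), counts_word) + (1 - LAMBDA1) * backoff

# toy counts: how often each head-daughter category H was seen under each condition
counts_word = {("VP", "saw"): {"V": 3}}
counts_tag  = {("VP", "V"):   {"V": 40, "MD": 10}}
counts_nt   = {"VP":          {"V": 70, "MD": 20, "VP": 10}}
print(p_smooth_h("V", "saw", "V", "VP", counts_word, counts_tag, counts_nt))  # 0.928
```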

The Parsing Algorithm for a Lexicalized PCFG
Bottom-up chart parsing
–elements of the chart: a pair ⟨span, score⟩
–total probability = product of elementary probabilities ⇒ enables dynamic programming: discard a chart element with the same span but a lower score
"Score" computation:
–joining n chart elements e_1, ..., e_n (with scores p_1, ..., p_n) into a new element e_new:
  P(e_new) = p_1 × p_2 × ... × p_n × P_H(...) × Π P_R(...) × Π P_L(...)
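A minimal sketch of the dynamic-programming step; keying chart elements by (span, label, head) is an assumption beyond the slide's plain span, and a single rule probability stands in for the P_H, P_R, P_L factors.

```python
# Minimal sketch: prune chart elements per key and multiply scores when joining.

chart = {}   # (start, end, label, head) -> best score

def add_edge(start, end, label, head, score):
    key = (start, end, label, head)
    if score > chart.get(key, 0.0):       # keep only the best-scoring element per key
        chart[key] = score
        return True
    return False                          # pruned: same key, lower score

def join(left_key, right_key, label, head, rule_prob):
    """Combine two adjacent chart elements: P(e_new) = p_left * p_right * P(rule)."""
    (s1, e1, _, _), (s2, e2, _, _) = left_key, right_key
    assert e1 == s2, "elements must be adjacent"
    score = chart[left_key] * chart[right_key] * rule_prob
    return add_edge(s1, e2, label, head, score)

add_edge(0, 1, "NP", "a_dog", 0.9)
add_edge(1, 5, "VP", "saw", 0.02)
join((0, 1, "NP", "a_dog"), (1, 5, "VP", "saw"), "S", "saw", rule_prob=0.5)
print(chart[(0, 5, "S", "saw")])   # 0.009
```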

Results
English, WSJ, Penn Treebank, 40k sentences:
                              <40 Words   <100 Words
–Labeled Recall:                88.1%       87.5%
–Labeled Precision:             88.6%       88.1%
–Crossing Brackets (avg):
–Sentences with 0 CBs:          66.4%       63.9%
Czech, Prague Dependency Treebank, 13k sentences:
–Dependency Accuracy overall: 80.0% (~ unlabelled precision/recall)