Parsing V: LR(1) Parsers

LR(1) parsers are table-driven, shift-reduce parsers that use a limited right context (1 token) for handle recognition.

Comp 412, Rice University, Houston, Texas, Fall 2001. Copyright 2000, Keith D. Cooper, Ken Kennedy, & Linda Torczon, all rights reserved.

Parsing V LR(1) Parsers

LR(1) Parsers

LR(1) parsers are table-driven, shift-reduce parsers that use a limited right context (1 token) for handle recognition. LR(1) parsers recognize languages that have an LR(1) grammar.

Informal definition: a grammar is LR(1) if, given a rightmost derivation

  S ⇒ γ0 ⇒ γ1 ⇒ γ2 ⇒ … ⇒ γn–1 ⇒ γn = sentence

we can
1. isolate the handle of each right-sentential form γi, and
2. determine the production by which to reduce,
by scanning γi from left to right, going at most 1 symbol beyond the right end of the handle of γi.

LR(1) Parsers

A table-driven LR(1) parser looks like this:

  source code → Scanner → tokens → Table-driven Parser → IR
  grammar → Parser Generator → ACTION & GOTO Tables (used by the Table-driven Parser)

Tables can be built by hand. However, this is a perfect task to automate.

LR(1) Skeleton Parser

  stack.push(INVALID); stack.push(s0);
  not_found = true;
  token = scanner.next_token();
  while (not_found) {
    s = stack.top();
    if ( ACTION[s,token] == "reduce A → β" ) then {
      stack.popnum(2*|β|);          // pop 2*|β| symbols
      s = stack.top();
      stack.push(A);
      stack.push(GOTO[s,A]);
    }
    else if ( ACTION[s,token] == "shift si" ) then {
      stack.push(token);
      stack.push(si);
      token = scanner.next_token();
    }
    else if ( ACTION[s,token] == "accept" && token == EOF )
      then not_found = false;
    else
      report a syntax error and recover;
  }
  report success;

The skeleton parser
- uses the ACTION & GOTO tables
- does |words| shifts
- does |derivation| reductions
- does 1 accept
- detects errors by failure of the 3 other cases
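The skeleton above can be made concrete. The following is a minimal runnable sketch in Python (not the lecture's code): the table encoding is my own, with the left-recursive SheepNoise tables hard-coded, and the state numbers assumed to match the DFA sketched later in the lecture.

```python
# Minimal LR(1) skeleton driver (a sketch, not the lecture's implementation).
# Tables encode the left-recursive SheepNoise grammar:
#   1: Goal -> SheepNoise   2: SheepNoise -> SheepNoise baa   3: SheepNoise -> baa
EOF = "<EOF>"
ACTION = {
    (0, "baa"): ("shift", 2),
    (1, "baa"): ("shift", 3),
    (1, EOF):   ("accept",),
    (2, "baa"): ("reduce", "SheepNoise", 1),  # SheepNoise -> baa
    (2, EOF):   ("reduce", "SheepNoise", 1),
    (3, "baa"): ("reduce", "SheepNoise", 2),  # SheepNoise -> SheepNoise baa
    (3, EOF):   ("reduce", "SheepNoise", 2),
}
GOTO = {(0, "SheepNoise"): 1}

def parse(words):
    """Return True if words is a sentence in L(SheepNoise), else False."""
    tokens = list(words) + [EOF]
    stack = ["INVALID", 0]              # alternating symbol / state entries
    pos = 0
    while True:
        act = ACTION.get((stack[-1], tokens[pos]))
        if act is None:
            return False                # error: no table entry
        if act[0] == "shift":
            stack += [tokens[pos], act[1]]
            pos += 1
        elif act[0] == "reduce":        # pop 2*|rhs| entries, then push lhs & GOTO state
            _, lhs, rhs_len = act
            del stack[-2 * rhs_len:]
            stack += [lhs, GOTO[(stack[-1], lhs)]]
        else:                           # accept
            return True
```

For example, `parse(["baa", "baa"])` accepts, while `parse([])` reports an error, mirroring the skeleton's four cases.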

LR(1) Parsers (parse tables)

To make a parser for L(G), we need a set of tables. Remember, this is the left-recursive SheepNoise; EaC shows the right-recursive version.

The grammar
  1  Goal → SheepNoise
  2  SheepNoise → SheepNoise baa
  3  SheepNoise → baa

The tables (shown as a figure on the slide; reconstructed here from the canonical collection built later in this lecture; blank ACTION entries are errors)

  state | ACTION on EOF | ACTION on baa | GOTO on SheepNoise
    0   |               |   shift 2     |         1
    1   |    accept     |   shift 3     |
    2   |   reduce 3    |   reduce 3    |
    3   |   reduce 2    |   reduce 2    |

Example Parse 1

The string "baa" (trace reconstructed from the tables; the slide built it up one action at a time):

  stack                 input     ACTION
  $ s0                  baa EOF   shift 2
  $ s0 baa s2           EOF       reduce 3 (SheepNoise → baa)
  $ s0 SheepNoise s1    EOF       accept

Example Parse 2

The string "baa baa" (trace reconstructed from the tables; the slide built it up one action at a time):

  stack                       input         ACTION
  $ s0                        baa baa EOF   shift 2
  $ s0 baa s2                 baa EOF       reduce 3 (SheepNoise → baa)
  $ s0 SheepNoise s1          baa EOF       shift 3
  $ s0 SheepNoise s1 baa s3   EOF           reduce 2 (SheepNoise → SheepNoise baa)
  $ s0 SheepNoise s1          EOF           accept

LR(1) Parsers

How does this LR(1) stuff work?
- Unambiguous grammar ⇒ unique rightmost derivation
- Keep the upper fringe on a stack
  - All active handles include the top of stack (TOS)
  - Shift inputs until TOS is the right end of a handle
- Language of handles is regular (finite)
  - Build a handle-recognizing DFA
  - ACTION & GOTO tables encode the DFA
- To match a subterm, invoke the subterm DFA & leave the old DFA's state on the stack
- Final state in DFA ⇒ a reduce action
  - New state is GOTO[state at TOS (after pop), lhs]
  - For SN, this takes the DFA to s1

(The slide showed the control DFA for SN: s0 → s2 on baa, s0 → s1 on SN, s1 → s3 on baa; s2 and s3 are reduce actions.)

Building LR(1) Parsers

How do we generate the ACTION and GOTO tables?
- Use the grammar to build a model of the DFA
- Use the model to build the ACTION & GOTO tables
- If the construction succeeds, the grammar is LR(1)

The Big Picture
- Model the state of the parser
- Use two functions goto(s, X) and closure(s), where X is a terminal or non-terminal
  - goto() is analogous to Delta() in the subset construction
  - closure() adds information to round out a state
- Build up the states and transition functions of the DFA
- Use this information to fill in the ACTION and GOTO tables

LR(k) items

The LR(1) table construction algorithm uses LR(1) items to represent valid configurations of an LR(1) parser.

An LR(k) item is a pair [P, δ], where
- P is a production A → β with a • at some position in the rhs
- δ is a lookahead string of length ≤ k (words or EOF)

The • in an item indicates the position of the top of the stack.
- [A → •βγ, a] means that the input seen so far is consistent with the use of A → βγ immediately after the symbol on top of the stack, i.e., this item is a "possibility"
- [A → β•γ, a] means that the input seen so far is consistent with the use of A → βγ at this point in the parse, and that the parser has already recognized β, i.e., this item is "partially complete"
- [A → βγ•, a] means that the parser has seen βγ, and that a lookahead symbol of a is consistent with reducing to A, i.e., this item is "complete"

LR(1) Items

The production A → β, where β = B1B2B3, with lookahead a, can give rise to 4 items:

  [A → •B1B2B3, a], [A → B1•B2B3, a], [A → B1B2•B3, a], & [A → B1B2B3•, a]

The set of LR(1) items for a grammar is finite.

What's the point of all these lookahead symbols?
- Carry them along to choose the correct reduction, if there is a choice
- Lookaheads are bookkeeping, unless the item has the • at its right end
  - The lookahead has no direct use in [A → β•γ, a]
  - In [A → β•, a], a lookahead of a implies a reduction by A → β
  - For { [A → β•, a], [B → γ•δ, b] }, a ⇒ reduce to A; FIRST(δ) ⇒ shift
- Limited right context is enough to pick the actions
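As a concrete representation (my own encoding, not the lecture's), an item can be a tuple (lhs, rhs, dot, lookahead), and a production with a three-symbol rhs then yields exactly four dot positions:

```python
# Represent an LR(1) item [A -> B1 . B2 B3, a] as (lhs, rhs, dot, lookahead),
# where dot is the index of the marker within rhs.
def items_for(lhs, rhs, lookahead):
    """All items for one production: one per dot position, 0 .. len(rhs)."""
    return [(lhs, tuple(rhs), dot, lookahead) for dot in range(len(rhs) + 1)]

# The 4 items for A -> B1 B2 B3 with lookahead a.
items = items_for("A", ["B1", "B2", "B3"], "a")
```

This tuple form is what the closure() and goto() sketches later in the lecture can operate on.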

LR(1) Table Construction

High-level overview
1. Build the canonical collection of sets of LR(1) items, I
   a. Begin in an appropriate state, s0
      - [S' → •S, EOF], along with any equivalent items
      - Derive the equivalent items as closure(s0)
   b. Repeatedly compute, for each sk and each X, goto(sk, X)
      - If the set is not already in the collection, add it
      - Record all the transitions created by goto()
      This eventually reaches a fixed point
2. Fill in the table from the collection of sets of LR(1) items

The canonical collection completely encodes the transition diagram for the handle-finding DFA.

Back to Finding Handles

Revisiting an issue from last class: the parser was in a state where the stack (the fringe) was Expr – Term, with a lookahead of *. How did it choose to expand Term rather than reduce to Expr?

The lookahead symbol is the key:
- With a lookahead of + or –, the parser should reduce to Expr
- With a lookahead of * or /, the parser should shift

The parser uses lookahead to decide. All this context from the grammar is encoded in the handle-recognizing mechanism.

Back to x – 2 * y

1. Shift until TOS is the right end of a handle
2. Find the left end of the handle & reduce

Remember this slide from last lecture? (Its figure marked the "shift here" and "reduce here" points on the parse.)

Computing FIRST Sets

Define FIRST as
- If α ⇒* aβ, with a ∈ T and β ∈ (T ∪ NT)*, then a ∈ FIRST(α)
- If α ⇒* ε, then ε ∈ FIRST(α)
- Note: if α = Xβ, FIRST(α) = FIRST(X)

To compute FIRST
- Use a fixed-point method
- FIRST(A) ∈ 2^(T ∪ {ε})
- The loop is monotonic, so the algorithm halts
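The fixed-point computation can be sketched as follows (my own encoding, not from the slides: ε is represented as the empty string, and productions are (lhs, rhs) pairs with rhs a tuple, empty for an ε-production):

```python
def first_sets(terminals, nonterminals, productions):
    """Fixed-point computation of FIRST(X) for every grammar symbol X.
    Epsilon is represented as ""; an epsilon production has rhs == ()."""
    FIRST = {t: {t} for t in terminals}
    FIRST.update({nt: set() for nt in nonterminals})
    changed = True
    while changed:                      # monotonic loop, so it halts
        changed = False
        for lhs, rhs in productions:
            before = len(FIRST[lhs])
            prefix_nullable = True      # does the rhs prefix seen so far derive epsilon?
            for sym in rhs:
                FIRST[lhs] |= FIRST[sym] - {""}
                if "" not in FIRST[sym]:
                    prefix_nullable = False
                    break
            if prefix_nullable:         # whole rhs can derive epsilon
                FIRST[lhs].add("")
            changed |= len(FIRST[lhs]) != before
    return FIRST

# SheepNoise: FIRST(Goal) = FIRST(SheepNoise) = { baa }
sn_first = first_sets({"baa"}, {"Goal", "SheepNoise"},
                      [("Goal", ("SheepNoise",)),
                       ("SheepNoise", ("SheepNoise", "baa")),
                       ("SheepNoise", ("baa",))])
```

SheepNoise has no ε-productions, so the computation converges in one pass; grammars with nullable symbols exercise the ε fall-through.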

Computing Closures

Closure(s) adds all the items implied by items already in s.
- Any item [A → β•Bδ, a] implies [B → •τ, x] for each production with B on the lhs, and each x ∈ FIRST(δa)
- Since βBδ is valid, any way to derive βBδ is valid, too

The algorithm

  Closure(s)
    while (s is still changing)
      ∀ items [A → β•Bδ, a] ∈ s
        ∀ productions B → τ ∈ P
          ∀ b ∈ FIRST(δa)          // δ might be ε
            if [B → •τ, b] ∉ s
              then add [B → •τ, b] to s

- Classic fixed-point method
- Halts because s ⊆ ITEMS
- A worklist version is faster
- Closure "fills out" a state
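The algorithm above can be sketched in Python (again my own encoding: items are (lhs, rhs, dot, lookahead) tuples, and FIRST is passed in as a precomputed dict; the SheepNoise values below are hard-coded since the grammar has no ε-productions):

```python
def closure(s, productions, FIRST):
    """Expand state s (a set of (lhs, rhs, dot, la) items) to its closure:
    [A -> beta . B delta, a] implies [B -> . tau, x] for each production
    B -> tau and each x in FIRST(delta a)."""
    s = set(s)
    changed = True
    while changed:                          # classic fixed-point loop
        changed = False
        for (lhs, rhs, dot, la) in list(s):
            if dot >= len(rhs):
                continue                    # dot at the right end: complete item
            B, delta = rhs[dot], rhs[dot + 1:]
            # FIRST(delta a): scan delta, falling through to la if delta is nullable
            las, nullable = set(), True
            for sym in delta:
                las |= FIRST[sym] - {""}
                if "" not in FIRST[sym]:
                    nullable = False
                    break
            if nullable:
                las.add(la)
            for plhs, prhs in productions:
                if plhs == B:
                    for b in las:
                        item = (plhs, prhs, 0, b)
                        if item not in s:
                            s.add(item)
                            changed = True
    return frozenset(s)

EOF = "<EOF>"
PRODS = [("Goal", ("SheepNoise",)),
         ("SheepNoise", ("SheepNoise", "baa")),
         ("SheepNoise", ("baa",))]
FIRST = {"baa": {"baa"}, "SheepNoise": {"baa"}, "Goal": {"baa"}}

# Closure of [Goal -> . SheepNoise, EOF]: the five items of S0 on the next slide.
S0 = closure({("Goal", ("SheepNoise",), 0, EOF)}, PRODS, FIRST)
```

The frozenset return value makes a state hashable, which matters later when states become keys in the transition table.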

Example From SheepNoise

The initial step builds the item [Goal → •SheepNoise, EOF] and takes its closure().

Closure([Goal → •SheepNoise, EOF])

So, S0 is { [Goal → •SheepNoise, EOF], [SheepNoise → •SheepNoise baa, EOF], [SheepNoise → •baa, EOF], [SheepNoise → •SheepNoise baa, baa], [SheepNoise → •baa, baa] }

Computing Gotos

Goto(s, x) computes the state that the parser would reach if it recognized an x while in state s.
- Goto({ [A → β•Xδ, a] }, X) produces [A → βX•δ, a] (obviously)
- It also includes closure([A → βX•δ, a]) to fill out the state

The algorithm

  Goto(s, X)
    new ← Ø
    ∀ items [A → β•Xδ, a] ∈ s
      new ← new ∪ { [A → βX•δ, a] }
    return closure(new)

- Straightforward computation
- Uses closure()
- Goto() moves the • forward
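The loop inside Goto() is just a dot advance; a sketch (same hypothetical tuple encoding as before; the full Goto would also take the closure of the result, which adds nothing in the example below because the dot lands at the right end of every item):

```python
EOF = "<EOF>"

def advance_dot(s, X):
    """The loop body of Goto(s, X): advance the dot over X in every item
    of s that expects X next. Full Goto = closure(advance_dot(s, X))."""
    return {(lhs, rhs, dot + 1, la)
            for (lhs, rhs, dot, la) in s
            if dot < len(rhs) and rhs[dot] == X}

# S0 for the left-recursive SheepNoise grammar.
S0 = {("Goal", ("SheepNoise",), 0, EOF),
      ("SheepNoise", ("SheepNoise", "baa"), 0, EOF),
      ("SheepNoise", ("SheepNoise", "baa"), 0, "baa"),
      ("SheepNoise", ("baa",), 0, EOF),
      ("SheepNoise", ("baa",), 0, "baa")}

# Goto(S0, baa): { [SheepNoise -> baa ., EOF], [SheepNoise -> baa ., baa] }
S2 = advance_dot(S0, "baa")
```

Advancing over SheepNoise instead yields the three items of S1 on the following slide.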

Example from SheepNoise

S0 is { [Goal → •SheepNoise, EOF], [SheepNoise → •SheepNoise baa, EOF], [SheepNoise → •baa, EOF], [SheepNoise → •SheepNoise baa, baa], [SheepNoise → •baa, baa] }

Goto(S0, baa)
- The loop produces { [SheepNoise → baa•, EOF], [SheepNoise → baa•, baa] }
- Closure adds nothing since the • is at the end of the rhs in each item

In the construction, this produces s2: { [SheepNoise → baa•, {EOF, baa}] }, a new but obvious notation for the two distinct items [SheepNoise → baa•, EOF] & [SheepNoise → baa•, baa]

Example from SheepNoise

S0: { [Goal → •SheepNoise, EOF], [SheepNoise → •SheepNoise baa, EOF], [SheepNoise → •baa, EOF], [SheepNoise → •SheepNoise baa, baa], [SheepNoise → •baa, baa] }

S1 = Goto(S0, SheepNoise) = { [Goal → SheepNoise•, EOF], [SheepNoise → SheepNoise •baa, EOF], [SheepNoise → SheepNoise •baa, baa] }

S2 = Goto(S0, baa) = { [SheepNoise → baa•, EOF], [SheepNoise → baa•, baa] }

S3 = Goto(S1, baa) = { [SheepNoise → SheepNoise baa•, EOF], [SheepNoise → SheepNoise baa•, baa] }

Building the Canonical Collection

Start from s0 = closure([S' → •S, EOF]). Repeatedly construct new states, until all are found.

The algorithm

  s0 ← closure([S' → •S, EOF])
  S ← { s0 }
  k ← 1
  while (S is still changing)
    ∀ sj ∈ S and ∀ x ∈ (T ∪ NT)
      sk ← goto(sj, x)
      record sj → sk on x
      if sk ∉ S then
        S ← S ∪ { sk }
        k ← k + 1

- Fixed-point computation
- The loop adds to S
- S ⊆ 2^ITEMS, so S is finite
- A worklist version is faster
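Putting closure() and goto() together, the whole construction can be sketched in Python, specialized here to SheepNoise (my own encoding, not the lecture's code: items are (lhs, rhs, dot, lookahead) tuples, and FIRST is hard-coded because the grammar has no ε-productions):

```python
# Canonical collection of sets of LR(1) items for left-recursive SheepNoise.
EOF = "<EOF>"
PRODS = [("Goal", ("SheepNoise",)),
         ("SheepNoise", ("SheepNoise", "baa")),
         ("SheepNoise", ("baa",))]
FIRST = {"baa": {"baa"}, "SheepNoise": {"baa"}, "Goal": {"baa"}}
SYMBOLS = ["baa", "SheepNoise", "Goal"]        # terminals and nonterminals

def closure(s):
    s = set(s)
    changed = True
    while changed:
        changed = False
        for (lhs, rhs, dot, la) in list(s):
            if dot >= len(rhs):
                continue
            B, delta = rhs[dot], rhs[dot + 1:]
            # FIRST(delta a); no epsilons in SheepNoise, so one symbol decides
            las = FIRST[delta[0]] if delta else {la}
            for plhs, prhs in PRODS:
                if plhs == B:
                    for b in las:
                        if (plhs, prhs, 0, b) not in s:
                            s.add((plhs, prhs, 0, b))
                            changed = True
    return frozenset(s)

def goto(s, X):
    return closure({(l, r, d + 1, a) for (l, r, d, a) in s
                    if d < len(r) and r[d] == X})

def canonical_collection():
    s0 = closure({("Goal", ("SheepNoise",), 0, EOF)})
    states, transitions = {s0}, {}
    changed = True
    while changed:                  # fixed point: the loop only adds states
        changed = False
        for s in list(states):
            for X in SYMBOLS:
                t = goto(s, X)
                if t:               # skip empty (error) transitions
                    transitions[(s, X)] = t
                    if t not in states:
                        states.add(t)
                        changed = True
    return states, transitions

states, transitions = canonical_collection()
```

For SheepNoise this yields four states (S0 through S3 on the following slides) and three transitions: S0 on baa, S0 on SheepNoise, and S1 on baa.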

Example from SheepNoise

Starts with S0
S0: { [Goal → •SheepNoise, EOF], [SheepNoise → •SheepNoise baa, EOF], [SheepNoise → •baa, EOF], [SheepNoise → •SheepNoise baa, baa], [SheepNoise → •baa, baa] }

Iteration 1 computes
S1 = Goto(S0, SheepNoise) = { [Goal → SheepNoise•, EOF], [SheepNoise → SheepNoise •baa, EOF], [SheepNoise → SheepNoise •baa, baa] }
S2 = Goto(S0, baa) = { [SheepNoise → baa•, EOF], [SheepNoise → baa•, baa] }

Iteration 2 computes
S3 = Goto(S1, baa) = { [SheepNoise → SheepNoise baa•, EOF], [SheepNoise → SheepNoise baa•, baa] }

Nothing more to compute, since the • is at the end of every item in S3.