Bottom Up Parsing.

Presentation on theme: "Bottom Up Parsing."— Presentation transcript:

1 Bottom Up Parsing

2 Bottom-Up Parsing Bottom-up parsing is more general than top-down parsing And just as efficient Builds on ideas in top-down parsing Preferred method in practice Also called LR parsing L means that tokens are read left to right R means that it constructs a rightmost derivation in reverse!

3 An Introductory Example
LR parsers don’t need left-factored grammars and can also handle left-recursive grammars Consider the following grammar: E → E + ( E ) | int Why is this not LL(1)? Consider the string: int + ( int ) + ( int )

4 The Idea LR parsing reduces a string to the start symbol by inverting productions: str ← input string of terminals repeat Identify β in str such that A → β is a production (i.e., str = α β γ) Replace β by A in str (i.e., str ← α A γ) until str = S
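The loop above can be sketched directly in Python for the example grammar E → E + ( E ) | int. This is only a naive illustration: it greedily tries the rightmost match, which happens to succeed on this grammar, whereas a real LR parser chooses each reduction deterministically. All names are illustrative.

```python
# The example grammar, E -> E + ( E ) | int, as (lhs, rhs) pairs
GRAMMAR = [("E", ["E", "+", "(", "E", ")"]), ("E", ["int"])]

def reduce_to_start(tokens, start="E"):
    s = list(tokens)
    changed = True
    while changed and s != [start]:
        changed = False
        for lhs, rhs in GRAMMAR:
            # scan right to left, mimicking a rightmost derivation in reverse
            for i in range(len(s) - len(rhs), -1, -1):
                if s[i:i + len(rhs)] == rhs:
                    s[i:i + len(rhs)] = [lhs]   # replace beta by A in str
                    changed = True
                    break
            if changed:
                break
    return s
```

For the string from the slides, `reduce_to_start("int + ( int ) + ( int )".split())` reduces all the way down to the start symbol E.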

5 A Bottom-up Parse in Detail (1)
int + (int) + (int)

6 A Bottom-up Parse in Detail (2)
int + (int) + (int)
E + (int) + (int)

7 A Bottom-up Parse in Detail (3)
int + (int) + (int)
E + (int) + (int)
E + (E) + (int)

8 A Bottom-up Parse in Detail (4)
int + (int) + (int)
E + (int) + (int)
E + (E) + (int)
E + (int)

9 A Bottom-up Parse in Detail (5)
int + (int) + (int)
E + (int) + (int)
E + (E) + (int)
E + (int)
E + (E)

10 A Bottom-up Parse in Detail (6)
int + (int) + (int)
E + (int) + (int)
E + (E) + (int)
E + (int)
E + (E)
E
A rightmost derivation in reverse (parse-tree figure omitted)

11 An LR parser traces a rightmost derivation in reverse
Important Fact #1 about bottom-up parsing: An LR parser traces a rightmost derivation in reverse

12 Where Do Reductions Happen
Important Fact #1 has an interesting consequence: Let αβγ be a step of a bottom-up parse Assume the next reduction is by A → β Then γ is a string of terminals Why? Because αAγ → αβγ is a step in a rightmost derivation

13 Notation Idea: Split string into two substrings
Right substring is as yet unexamined by parsing (a string of terminals) Left substring has terminals and non-terminals The dividing point is marked by a | The | is not part of the string Initially, all input is unexamined: | x1 x2 … xn

14 Shift-Reduce Parsing Bottom-up parsing uses only two kinds of actions:

15 Shift Shift: Move I one place to the right
Shifts a terminal to the left string E + ( | int )  →  E + ( int | )

16 Reduce Reduce: Apply an inverse production at the right end of the left string If E → E + ( E ) is a production, then E + ( E + ( E ) | )  →  E + ( E | )

17-27 Shift-Reduce Example (the trace below is built one step per slide)
| int + (int) + (int)$        shift
int | + (int) + (int)$        red. E → int
E | + (int) + (int)$          shift 3 times
E + (int | ) + (int)$         red. E → int
E + (E | ) + (int)$           shift
E + (E) | + (int)$            red. E → E + (E)
E | + (int)$                  shift 3 times
E + (int | )$                 red. E → int
E + (E | )$                   shift
E + (E) | $                   red. E → E + (E)
E | $                         accept

28 The Stack Left string can be implemented by a stack
Top of the stack is the | Shift pushes a terminal on the stack Reduce pops 0 or more symbols off the stack (production rhs) and pushes a non-terminal on the stack (production lhs)

29 Key Issue: When to Shift or Reduce ?
Idea: use a finite automaton (FA) to decide when to shift or reduce The input is the stack The language consists of terminals and non-terminals We run the FA on the stack and we examine the resulting state X and the token tok after | If X has a transition labeled tok then shift If X is labeled with “A → β on tok” then reduce

30 LR(1) Parsing. An Example
0 | int + (int) + (int)$             shift
0 int 1 | + (int) + (int)$           E → int
0 E 2 | + (int) + (int)$             shift 3 times
0 E 2 + 3 ( 4 int 5 | ) + (int)$     E → int
0 E 2 + 3 ( 4 E 6 | ) + (int)$       shift
0 E 2 + 3 ( 4 E 6 ) 7 | + (int)$     E → E+(E)
0 E 2 | + (int)$                     shift 3 times
0 E 2 + 3 ( 4 int 5 | )$             E → int
0 E 2 + 3 ( 4 E 6 | )$               shift
0 E 2 + 3 ( 4 E 6 ) 7 | $            E → E+(E)
0 E 2 | $                            accept
(The accompanying DFA figure, states 1-11 with actions such as “E → int on $, +”, “accept on $”, “E → int on ), +”, and “E → E + (E)”, is omitted.)

31 Representing the FA Parsers represent the Deterministic FA as a 2D table Recall table-driven lexical analysis Lines correspond to DFA states Columns correspond to terminals and non-terminals Typically columns are split into: Those for terminals: action table Those for non-terminals: goto table

32 Representing the DFA. Example
The table for a fragment of our DFA:
state | int | +            | (  | )            | $            | E
  3   |     |              | s4 |              |              |
  4   | s5  |              |    |              |              | g6
  5   |     | r E → int    |    | r E → int    |              |
  6   |     | s8           |    | s7           |              |
  7   |     | r E → E+(E)  |    |              | r E → E+(E)  |
(the accompanying DFA fragment figure is omitted)

33 The LR Parsing Algorithm
After a shift or reduce action we rerun the DFA on the entire stack This is wasteful, since most of the work is repeated Remember for each stack element on which state it brings the DFA LR parser maintains a stack ⟨sym1, state1⟩ … ⟨symn, staten⟩ where statek is the final state of the DFA on sym1 … symk

34 The LR Parsing Algorithm
Let w be the initial input Let j = 0 Let DFA state 0 be the start state Let stack = ⟨dummy, 0⟩ repeat case action[topState(stack), w[j]] of shift k: push ⟨w[j++], k⟩ reduce X → α: pop |α| pairs, push ⟨X, Goto[topState(stack), X]⟩ accept: halt normally error: halt and report error
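A hedged sketch of this algorithm in Python, instantiated for the later example grammar S → A $, A → (A) | ( ). The action and goto tables follow the SLR construction for that grammar; the state numbering is illustrative rather than taken from the slides.

```python
# Action table for S -> A $, A -> (A) | ( ); numbering is illustrative.
# A reduce entry carries the production's lhs and the length of its rhs.
ACTION = {
    (0, "("): ("shift", 2),
    (1, "$"): ("accept",),
    (2, "("): ("shift", 2), (2, ")"): ("shift", 4),
    (3, ")"): ("shift", 5),
    (4, ")"): ("reduce", "A", 2), (4, "$"): ("reduce", "A", 2),   # A -> ( )
    (5, ")"): ("reduce", "A", 3), (5, "$"): ("reduce", "A", 3),   # A -> (A)
}
GOTO = {(0, "A"): 1, (2, "A"): 3}

def lr_parse(tokens):
    stack = [("dummy", 0)]             # <symbol, state> pairs, as on the slide
    j = 0
    while True:
        act = ACTION.get((stack[-1][1], tokens[j]))
        if act is None:
            return False               # error: halt and report error
        if act[0] == "accept":
            return True                # halt normally
        if act[0] == "shift":
            stack.append((tokens[j], act[1]))
            j += 1
        else:                          # reduce X -> alpha: pop |alpha| pairs
            _, lhs, n = act
            del stack[-n:]
            stack.append((lhs, GOTO[(stack[-1][1], lhs)]))
```

`lr_parse("( ( ) ) $".split())` accepts, while an ill-formed input such as `( ( $` hits a missing table entry and reports an error.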

35 LR Parsing Notes Can be used to parse more grammars than LL
Most programming language grammars are LR Can be described as a simple table There are tools for building the table How is the table constructed?

36 LR Parsing and Parser Generators

37 Outline Implementing a Shift-reduce Parser Synthesize a DFA
It captures all the possible states that the parser can be in, with state transitions for terminals and non-terminals DFA for LR(0) Parser DFA for SLR Parser DFA for LR(1) Parser DFA for LALR(1) Parser Use the DFA to create a parse table Using parser generators

38 Key Issue: How is the DFA Constructed?
The stack describes the context of the parse What non-terminal we are looking for What production rhs we are looking for What we have seen so far from the rhs => LR(0) item Each DFA state describes several such contexts E.g., when we are looking for non-terminal E, we might be looking either for an int or an E + (E) rhs => LR(0) item set

39 LR Example The grammar S → A $ (1) A → (A) (2) A → ( ) (3)

40 DFA States Based on LR(0) Items
We need to capture how much of a given production we have scanned so far A → ( A ) Are we here? Or here?

41 LR(0) Items We need to capture how much of a given production we have scanned so far Production Generates 4 items A  • (A ) A  ( • A ) A  (A • ) A  (A ) • A  ( A )

42 Example of LR(0) Items The grammar S → A $, A → (A), A → ( ) generates the items S → • A $ S → A • $ S → A $ • A → • (A) A → ( • A) A → (A • ) A → (A) • A → • ( ) A → ( • ) A → ( ) •

43 Key idea behind LR(0) items
If the “current state” contains the item A → α • c β and the current symbol in the input buffer is c the state prompts the parser to perform a shift action next state will contain A → α c • β If the “state” contains the item A → α • the state prompts the parser to perform a reduce action If the “state” contains the item S → α • $ and the input buffer is empty the state prompts the parser to accept But how about A → α • B β, where B is a nonterminal?

44 The NFA for LR(0) items The transitions of LR(0) items can be represented by an NFA, in which 1. each LR(0) item is a state, 2. there is a transition from item A → α • X β to item A → α X • β with label X, 3. there is an ε-transition from item A → α • B β to B → • γ, 4. S → • A $ is the start state, 5. A → α • is a final state.

45 Example NFA for LR(0) Items
The items S → • A $ S → A • $ A → • (A) A → ( • A ) A → (A • ) A → (A) • A → • ( ) A → ( • ) A → ( ) • form the NFA states; the NFA figure with its X-labeled and ε-labeled transitions is omitted.

46 The DFA from LR(0) items After the NFA for LR(0) items is constructed, the resulting DFA for LR(0) parsing can be obtained by the usual NFA-to-DFA construction. We thus require ε-closure(I) and move(S, a)

47 Closure() of a set of items
Closure finds all the items in the same “state” Fixed Point Algorithm for Closure(I) Every item in I is also an item in Closure(I) If A → α • B β is in Closure(I) and B → • γ is an item, then add B → • γ to Closure(I) Repeat until no more new items can be added to Closure(I)
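The fixed-point algorithm can be sketched as follows, for the slides' grammar S → A $, A → (A) | ( ). Encoding an item as a (lhs, rhs, dot-position) triple is an assumption made for illustration.

```python
# Grammar from the slides: S -> A $, A -> (A) | ( )
PRODS = {"S": [("A", "$")], "A": [("(", "A", ")"), ("(", ")")]}

def closure(I):
    items = set(I)
    changed = True
    while changed:                      # repeat until no new items can be added
        changed = False
        for lhs, rhs, dot in list(items):
            # dot sits just before a nonterminal B: pull in every B -> . gamma
            if dot < len(rhs) and rhs[dot] in PRODS:
                for prod in PRODS[rhs[dot]]:
                    if (rhs[dot], prod, 0) not in items:
                        items.add((rhs[dot], prod, 0))
                        changed = True
    return items
```

Running it on the next slide's example, `closure({("A", ("(", "A", ")"), 1)})` (i.e., {A → ( • A )}) yields {A → ( • A ), A → • (A), A → • ( )}.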

48 Example of Closure Closure({A → ( • A )}) = { A → ( • A ),  A → • (A),  A → • ( ) }

49 Another Example Closure({S → • A $}) = { S → • A $,  A → • (A),  A → • ( ) }

50 Goto() of a set of items Goto finds the new state after consuming a grammar symbol while at the current state Algorithm for Goto(I, X), where I is a set of items and X is a grammar symbol: Goto(I, X) = Closure({ A → α X • β | A → α • X β ∈ I }) goto is the new set obtained by “moving the dot” over X

51 Example of Goto Goto({A → ( • A )}, A) = Closure({ A → ( A • ) }) = { A → ( A • ) }

52 Example of Goto Goto({A → • (A)}, “(” ) = Closure({ A → ( • A) }) = { A → ( • A),  A → • (A),  A → • ( ) }

53 Building the DFA states
Essentially the usual NFA-to-DFA construction! Let A be the start symbol and S a new start symbol. Create a new rule S → A $ Create the first state to be Closure({S → • A $}) Pick a state I for each item A → α • X β in I find Goto(I, X) if Goto(I, X) is not already a state, make one Add an edge X from state I to the Goto(I, X) state Repeat until no more additions possible
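Putting Closure and Goto together, the whole state-building loop might look like this for the same grammar S → A $, A → (A) | ( ). Treating the transition on $ as the accept action rather than building a state for it is an assumption made here so that the result matches the six-state DFA drawn on the next slide.

```python
PRODS = {"S": [("A", "$")], "A": [("(", "A", ")"), ("(", ")")]}

def closure(I):
    items, work = set(I), list(I)
    while work:
        lhs, rhs, dot = work.pop()
        if dot < len(rhs) and rhs[dot] in PRODS:   # dot before a nonterminal B
            for prod in PRODS[rhs[dot]]:
                item = (rhs[dot], prod, 0)         # add B -> . gamma
                if item not in items:
                    items.add(item)
                    work.append(item)
    return frozenset(items)

def goto(state, X):
    # "move the dot" over X, then close
    return closure({(l, r, d + 1) for l, r, d in state
                    if d < len(r) and r[d] == X})

def build_states():
    start = closure({("S", ("A", "$"), 0)})
    states, work = {start}, [start]
    while work:
        state = work.pop()
        # symbols right of a dot; $ is handled by the accept action (assumption)
        symbols = {r[d] for _, r, d in state if d < len(r)} - {"$"}
        for X in symbols:
            nxt = goto(state, X)
            if nxt not in states:
                states.add(nxt)
                work.append(nxt)
    return states
```

For this grammar the loop produces six states, corresponding to s0-s5 on the next slide.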

54 DFA Example States s0-s5, with s0 = Closure({S → • A$}) = { S → • A$,  A → • (A),  A → • ( ) } (DFA figure omitted)

55 Constructing a LR(0) Parse Engine
Build a DFA DONE Construct a parse table using the DFA

56 Creating the parse tables
For each state Transition to another state using a terminal symbol is a shift to that state (shift to sn) Transition to another state using a non-terminal is a goto to that state (goto sn) If there is an item A → β • in the state do a reduction with that production for all terminals (reduce k)

57 Building Parse Table Example
s0 = { S → • A$,  A → • (A),  A → • ( ) }; on ( go to s2; on A go to s1 = { S → A • $ }
s2 = { A → ( • A),  A → ( • ),  A → • (A),  A → • ( ) }; on ( go to s2; on A go to s3 = { A → (A • ) }; on ) go to s4 = { A → ( ) • }
s3: on ) go to s5 = { A → (A) • }  (DFA figure omitted)

58 Problem With LR(0) Parsing
No lookahead Vulnerable to unnecessary conflicts Shift/Reduce Conflicts (may reduce too soon in some cases) Reduce/Reduce Conflicts Solutions: SLR(1) parsing - reduce only when next symbol can occur after nonterminal from production LR(1) parsing - systematic lookahead

59 SLR(1) Parsing If a state contains A → β •
Reduce by A → β only if the next input symbol can follow A in some derivation Example Grammar S → A $ A → a A → a b

60 LR(0) Parser s0 = { S → • A $,  A → • a,  A → • a b }; on A go to the state { S → A • $ }; on a go to the state { A → a •,  A → a • b }; on b go to the state { A → a b • } (DFA figure omitted)

61 Creating SLR parse tables
For each state Transition to another state using a terminal symbol is a shift to that state (shift to sn) (same as LR(0)) Transition to another state using a non-terminal is a goto to that state (goto sn) (same as LR(0)) If there is an item A → β • in the state do a reduction with that production only for the terminals T that may follow A in some derivation (more precise than LR(0))

62 Follow() sets For each non-terminal A, Follow(A) is the set of terminals that can come after A in some derivation

63 Constraints for Follow()
$ ∈ Follow(S), where S is the start symbol If A → α B β is a production then First(β) ⊆ Follow(B) If A → α B is a production then Follow(A) ⊆ Follow(B) If A → α B β is a production and β derives ε then Follow(A) ⊆ Follow(B)

64 Algorithm for Follow for all nonterminals NT: Follow(NT) = {}
Follow(S) = { $ } while Follow sets keep changing for all productions A → α B β: Follow(B) = Follow(B) ∪ First(β); if (β derives ε) Follow(B) = Follow(B) ∪ Follow(A) for all productions A → α B: Follow(B) = Follow(B) ∪ Follow(A)
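A sketch of this algorithm in Python, using the example grammar of the next slide (S → A $, A → a | a b). Since that grammar has no ε-productions, the "derives ε" branch is omitted; seeding $ into Follow(S) implements the first constraint. The encoding of productions as (lhs, rhs) tuples is illustrative.

```python
PRODS = [("S", ("A", "$")), ("A", ("a",)), ("A", ("a", "b"))]
NONTERMS = {"S", "A"}

def first(sym):
    # a terminal is its own First set; in this grammar no rhs starts
    # with a nonterminal after position 0, so this shortcut suffices
    if sym not in NONTERMS:
        return {sym}
    return {rhs[0] for lhs, rhs in PRODS if lhs == sym and rhs[0] not in NONTERMS}

def follow_sets():
    follow = {nt: set() for nt in NONTERMS}
    follow["S"] = {"$"}                 # constraint: $ in Follow(start symbol)
    changed = True
    while changed:                      # iterate to a fixed point
        changed = False
        for lhs, rhs in PRODS:
            for i, sym in enumerate(rhs):
                if sym in NONTERMS:
                    # A -> alpha B beta: add First(beta); at the end, Follow(A)
                    new = first(rhs[i + 1]) if i + 1 < len(rhs) else follow[lhs]
                    if not new <= follow[sym]:
                        follow[sym] |= new
                        changed = True
    return follow
```

On this grammar the fixed point gives Follow(S) = Follow(A) = { $ }, matching the next slide.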

65 Augmenting Example with Follow
Example Grammar for Follow S → A $ A → a A → a b Follow(S) = { $ } Follow(A) = { $ }

66 SLR Eliminates Shift/Reduce Conflict
b ∉ Follow(A) = { $ }, so the state { A → a •,  A → a • b } shifts on b and reduces A → a only on $; the shift/reduce conflict is gone (DFA figure omitted)

67 Basic Idea Behind LR(1) Split states in LR(0) DFA based on lookahead
Reduce based on item and lookahead

68 LR(1) Items
An LR(1) item is a pair [A → α • β, a], where A → α β is a production and a is a terminal (the lookahead terminal) LR(1) means 1 lookahead terminal [A → α • β, a] describes a context of the parser We are trying to find an A followed by an a, and We have α already on top of the stack Thus we need to see next a prefix derived from β a

69 Note The symbol I was used before to separate the stack from the rest of input a I g, where a is the stack and g is the remaining string of terminals In items ² is used to mark a prefix of a production rhs: A ® a²b, a Here b might contain non-terminals as well In both case the stack is on the left

70 Convention We add to our grammar a fresh new start symbol S and a production S ! E Where E is the old start symbol The initial parsing context contains: S ! ²E, $ Trying to find an S as a string derived from E$ The stack is empty

71 LR(1) Items (Cont.)
In a context containing [E → E + • ( E ), +], if ( follows then we can perform a shift to a context containing [E → E + ( • E ), +] In a context containing [E → E + ( E ) •, +] we can perform a reduction with E → E + ( E ) But only if a + follows

72 LR(1) Items (Cont.)
Consider the item [E → E + ( • E ), +] We expect a string derived from E ) + There are two productions for E E → int and E → E + ( E ) We describe this by extending the context with two more items: [E → • int, )] [E → • E + ( E ), )]

73 The Closure Operation The operation of extending the context with items is called the closure operation Closure(Items) = repeat for each [A → α • B β, a] in Items for each production B → γ for each b ∈ First(β a) add [B → • γ, b] to Items until Items is unchanged
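The LR(1) closure above can be sketched for the running grammar E → E + ( E ) | int, with items encoded as (lhs, rhs, dot, lookahead) tuples (an illustrative encoding). The First sets are precomputed by hand since the grammar is tiny and nothing derives ε.

```python
PRODS = {"E": [("E", "+", "(", "E", ")"), ("int",)]}
FIRST = {"E": {"int"}}        # First(E) = {int}; a terminal is its own First set

def first_of(seq):
    # First of a non-empty string; adequate here since nothing derives epsilon
    return FIRST.get(seq[0], {seq[0]})

def closure1(items):
    items, work = set(items), list(items)
    while work:
        lhs, rhs, dot, la = work.pop()
        if dot < len(rhs) and rhs[dot] in PRODS:     # item [A -> a . B b, la]
            beta_a = rhs[dot + 1:] + (la,)           # the string beta a
            for prod in PRODS[rhs[dot]]:
                for b in first_of(beta_a):           # b in First(beta a)
                    item = (rhs[dot], prod, 0, b)    # add [B -> . gamma, b]
                    if item not in items:
                        items.add(item)
                        work.append(item)
    return items
```

Closing the item [E → E + ( • E ), +] from the previous slide adds the new items with lookaheads ) and +, matching the )/+ lookaheads shown in the DFA a few slides later.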

74 Constructing the Parsing DFA (1)
Construct the start context: Closure({[S → • E, $]}) = S → • E, $ E → • E+(E), $ E → • int, $ E → • E+(E), + E → • int, + which we abbreviate as: S → • E, $ E → • E+(E), $/+ E → • int, $/+

75 Constructing the Parsing DFA (2)
A DFA state is a closed set of LR(1) items The start state contains [S → • E, $] A state that contains [A → α •, b] is labeled with “reduce with A → α on b” And now the transitions …

76 The DFA Transitions A state “State” that contains [A → α • y β, b] has a transition labeled y to a state that contains the items Transition(State, y) y can be a terminal or a non-terminal Transition(State, y): Items ← ∅ for each [A → α • y β, b] ∈ State add [A → α y • β, b] to Items return Closure(Items)

77 Constructing the Parsing DFA. Example.
State 1: S → • E, $  E → • E+(E), $/+  E → • int, $/+
on int, go to the state E → int •, $/+ (reduce E → int on $, +)
on E, go to state 2: S → E •, $  E → E • +(E), $/+ (accept on $)
state 2 on +, go to state 3: E → E + • (E), $/+
state 3 on (, go to state 4: E → E + ( • E), $/+  E → • E+(E), )/+  E → • int, )/+
state 4 on E, go to state 5: E → E+(E • ), $/+  E → E • +(E), )/+
state 4 on int, go to state 6: E → int •, )/+ (reduce E → int on ), +)
and so on… (DFA figure omitted)

78 LR Parsing Tables. Notes
Parsing tables (i.e. the DFA) can be constructed automatically for a CFG But we still need to understand the construction to work with parser generators E.g., they report errors in terms of sets of items What kind of errors can we expect?

79 Shift/Reduce Conflicts
If a DFA state contains both [A → α • a β, b] and [B → γ •, a] Then on input “a” we could either Shift into state [A → α a • β, b], or Reduce with B → γ This is called a shift-reduce conflict

80 Shift/Reduce Conflicts
Typically due to ambiguities in the grammar Classic example: the dangling else S → if E then S | if E then S else S | OTHER Will have a DFA state containing [S → if E then S •, else] [S → if E then S • else S, x] If else follows then we can shift or reduce Default (bison, CUP, etc.) is to shift Default behavior is as needed in this case

81 More Shift/Reduce Conflicts
Consider the ambiguous grammar E → E + E | E * E | int We will have states containing [E → E * • E, +] [E → • E + E, +] …  which on E go to  [E → E * E •, +] [E → E • + E, +] … Again we have a shift/reduce on input + We need to reduce (* binds more tightly than +) Recall the solution: declare the precedence of * and +

82 More Shift/Reduce Conflicts
In bison declare precedence and associativity: %left + %left * In CUP: precedence left ADD SUB precedence left MULT DIV Precedence of a rule = that of its last terminal See CUP/bison manual for ways to override this default Resolve shift/reduce conflict with a shift if: no precedence declared for either rule or terminal input terminal has higher precedence than the rule the precedences are the same and right associative
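The three resolution rules above can be summarized in a few lines. This sketch assumes precedence is given as a (level, associativity) pair for both the rule and the input terminal, which is an illustrative encoding rather than what bison or CUP store internally.

```python
def resolve(rule_prec, tok_prec):
    """Return 'shift' or 'reduce' for a shift/reduce conflict.

    rule_prec / tok_prec are (level, assoc) pairs, or None when no
    precedence was declared (hypothetical encoding for illustration).
    """
    if rule_prec is None or tok_prec is None:
        return "shift"                  # rule 1: no precedence declared: shift
    r_level, _ = rule_prec
    t_level, t_assoc = tok_prec
    if t_level > r_level:
        return "shift"                  # rule 2: terminal binds tighter
    if t_level < r_level:
        return "reduce"                 # rule binds tighter
    # rule 3: same precedence: associativity decides
    return "shift" if t_assoc == "right" else "reduce"
```

This reproduces the decisions worked out on the next slides: reduce when the rule's precedence is higher, shift when the terminal's is higher, and reduce on equal precedence with left associativity.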

83 Using Precedence to Solve S/R Conflicts
Back to our example: [E → E * • E, +] [E → • E + E, +] …  which on E go to  [E → E * E •, +] [E → E • + E, +] … Will choose reduce because the precedence of rule E → E * E is higher than that of terminal +

84 Using Precedence to Solve S/R Conflicts
Same grammar as before E → E + E | E * E | int We will also have the states [E → E + • E, *] [E → • E * E, *] …  which on E go to  [E → E + E •, *] [E → E • * E, *] … Now we also have a shift/reduce on input * We choose shift because * has a higher precedence than the rule E → E + E.

85 Using Precedence to Solve S/R Conflicts
The grammar E → E + E | E - E | E * E | int We will also have the states [E → E + • E, -] [E → • E - E, -] …  which on E go to  [E → E + E •, -] [E → E • - E, -] … Now we also have a shift/reduce on input - We choose reduce because E → E + E and - have the same precedence and +/- is left-associative

86 Using Precedence to Solve S/R Conflicts
Back to our dangling else example [S → if E then S •, else] [S → if E then S • else S, x] Can eliminate the conflict by declaring else with higher precedence than then Or just rely on the default shift action But this starts to look like “hacking the parser” Best to avoid overuse of precedence declarations or you’ll end up with unexpected parse trees

87 Reduce/Reduce Conflicts
If a DFA state contains both [A → α •, a] and [B → β •, a] Then on input “a” we don’t know which production to reduce with This is called a reduce/reduce conflict

88 Reduce/Reduce Conflicts
Usually due to gross ambiguity in the grammar Example: a sequence of identifiers S → ε | id | id S There are two derivations for the string id: S → id and S → id S → id How does this confuse the parser?

89 More on Reduce/Reduce Conflicts
Consider the states [S’ → • S, $] [S → • , $] [S → • id, $] [S → • id S, $]  which on id go to  [S → id •, $] [S → id • S, $] [S → • , $] [S → • id, $] [S → • id S, $] Reduce/reduce conflict on input $: S’ → S → id versus S’ → S → id S → id Better rewrite the grammar: S → ε | id S

90 Using Parser Generators
Parser generators construct the parsing DFA given a CFG Use precedence declarations and default conventions to resolve conflicts The parser algorithm is the same for all grammars (and is provided as a library function) But most parser generators do not construct the DFA as described before Because the LR(1) parsing DFA has 1000s of states even for a simple language

91 LR(1) Parsing Tables are Big
But many states are similar, e.g. state 1 = { E → int •, $/+ } (E → int on $, +) and state 5 = { E → int •, )/+ } (E → int on ), +) Idea: merge the DFA states whose items differ only in the lookahead tokens We say that such states have the same core We obtain the merged state 1’ = { E → int •, $/+/) } (E → int on $, +, ))

92 The Core of a Set of LR Items
Definition: The core of a set of LR items is the set of first components Without the lookahead terminals Example: the core of { [X → α • β, b], [Y → γ • δ, d] } is { X → α • β, Y → γ • δ }

93 LALR States Consider for example the LR(1) states
{ [A → α •, a], [B → β •, c] } and { [A → α •, b], [B → β •, d] } They have the same core and can be merged And the merged state contains: { [A → α •, a/b], [B → β •, c/d] } These are called LALR(1) states Stands for LookAhead LR Typically 10 times fewer LALR(1) states than LR(1)
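The merge itself is mechanical: group states by core and union their lookaheads. A sketch, with LR(1) states encoded as frozensets of (lhs, rhs, dot, lookahead) items (an illustrative encoding):

```python
def merge_by_core(states):
    """Merge LR(1) states that share a core into LALR(1) states."""
    merged = {}
    for state in states:
        # the core drops the lookahead component of every item
        core = frozenset((l, r, d) for l, r, d, _ in state)
        merged.setdefault(core, set()).update(state)   # union the lookaheads
    return [frozenset(s) for s in merged.values()]
```

Applied to the two example states above, it produces the single merged state containing all four items.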

94 A LALR(1) DFA Repeat until all states have distinct core
Choose two distinct states with the same core Merge the states by creating a new one with the union of all the items Point edges from predecessors to the new state The new state points to all the previous successors (figure omitted)

95 Conversion LR(1) to LALR(1). Example.
The LR(1) DFA with states 1-11 collapses into an LALR(1) DFA with merged states 1,5  2  3,8  4,9  6,10  7,11; for example, the merged state 1,5 reduces E → int on $, +, ) and the merged state 7,11 reduces E → E + (E) (both DFA figures are omitted)

96 The LALR Parser Can Have Conflicts
Consider for example the LR(1) states { [A → α •, a], [B → β •, b] } and { [A → α •, b], [B → β •, c] } And the merged LALR(1) state { [A → α •, a/b], [B → β •, b/c] } Has a new reduce-reduce conflict on b In practice such cases are rare

97 LALR vs. LR Parsing LALR languages are not natural
They are an efficiency hack on LR languages Any reasonable programming language has an LALR(1) grammar LALR(1) has become a standard for programming languages and for parser generators

98 A Hierarchy of Grammar Classes
From Andrew Appel, “Modern Compiler Implementation in Java”

99 Notes on Parsing Parsing A solid foundation: context-free grammars
A simple parser: LL(1) A more powerful parser: LR(1) An efficiency hack: LALR(1) LALR(1) parser generators

100 Supplement to LR Parsing
Strange Reduce/Reduce Conflicts Due to LALR Conversion (from the bison manual)

101 Strange Reduce/Reduce Conflicts
Consider the grammar S → P R ,  NL → N | N , NL  P → T | NL : T  R → T | N : T  N → id  T → id P - parameters specification: P is id | id+ : id R - result specification: R is id | id : id N - a parameter or result name T - a type name NL - a list of names

102 Strange Reduce/Reduce Conflicts
In P an id is an N when followed by , or : and a T when followed by id In R an id is an N when followed by : and a T when followed by , Ex: idN1 , idN2 : idT3  idN4 : idT5 , This is an LR(1) grammar. But it is not LALR(1). Why? For obscure reasons

103 A Few LR(1) States
State 1: [P → • T, id] [P → • NL : T, id] [NL → • N, :] [NL → • N , NL, :] [N → • id, :] [N → • id, ,] [T → • id, id]; on id go to state 3: [T → id •, id] [N → id •, :] [N → id •, ,]
State 2: [R → • T, ,] [R → • N : T, ,] [T → • id, ,] [N → • id, :]; on id go to state 4: [T → id •, ,] [N → id •, :]
States 3 and 4 have the same core, so the LALR merge produces { T → id •, id/,  N → id •, :/, }: an LALR reduce/reduce conflict on “,”

104 What Happened? Two distinct states were confused because they have the same core Fix: add dummy productions to distinguish the two confused states. E.g., add R → id bogus, where bogus is a terminal not used by the lexer This production will never be used during parsing But it distinguishes R from P

105 A Few LR(1) States After Fix
State 1: [P → • T, id] [P → • NL : T, id] [NL → • N, :] [NL → • N , NL, :] [N → • id, :] [N → • id, ,] [T → • id, id]; on id go to state 3: [T → id •, id] [N → id •, :] [N → id •, ,]
State 2: [R → • T, ,] [R → • N : T, ,] [R → • id bogus, ,] [T → • id, ,] [N → • id, :]; on id go to state 4: [T → id •, ,] [N → id •, :] [R → id • bogus, ,]
Different cores ⇒ no LALR merging

