Compiler Construction


Compiler Construction A compulsory module for students in the Computer Science Department, Faculty of IT / Al – Al Bayt University, Second Semester 2010/2011

Syntax Analyzer

Top-down parsing can be viewed as the problem of constructing a parse tree for the input string, starting from the root and creating the nodes of the parse tree in preorder (depth-first). Equivalently, top-down parsing can be viewed as finding a leftmost derivation for the input string.

Top-down Parsing (cont.) At each step of a top-down parse, the key problem is determining the production to be applied for a nonterminal, say A. Once an A-production is chosen, the rest of the parsing process consists of "matching" the terminal symbols in the production body with the input string. The general form of top-down parsing is recursive-descent parsing, which may require backtracking to find the correct A-production to apply. A special case of recursive-descent parsing is predictive parsing, where no backtracking is required. Another form of top-down parsing is nonrecursive predictive parsing, which maintains an explicit stack in addition to using a parsing table.

Recursive-Descent Parsing Algorithm A recursive-descent parsing program consists of a set of procedures, one for each nonterminal. Execution begins with the procedure for the start symbol. Recursive descent may require backtracking; that is, it may require repeated scans over the input (backtracking is not very efficient).
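The scheme above (one procedure per nonterminal, with a saved input pointer for backtracking) can be sketched in Python. The grammar S → c A d, A → a b | a is the classic textbook illustration and is an assumption here, not taken from the slides:

```python
# Recursive-descent parsing with backtracking: a minimal sketch for the
# assumed grammar S -> c A d, A -> a b | a.

def parse(tokens):
    pos = 0  # input pointer, saved and restored to backtrack

    def match(t):
        nonlocal pos
        if pos < len(tokens) and tokens[pos] == t:
            pos += 1
            return True
        return False

    def S():
        # S -> c A d
        return match('c') and A() and match('d')

    def A():
        nonlocal pos
        saved = pos                       # remember the input pointer
        if match('a') and match('b'):     # try A -> a b first
            return True
        pos = saved                       # backtrack, then try A -> a
        return match('a')

    return S() and pos == len(tokens)

print(parse(list("cad")))   # True: A -> a b fails on 'd', backtrack to A -> a
print(parse(list("cabd")))  # True: A -> a b succeeds without backtracking
```

Note how each alternative of A restores the saved input pointer before trying the next production, exactly the modification described on the next slide.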

Recursive-Descent Parsing (cont.) To allow backtracking, the above algorithm needs to be modified as follows: we cannot choose a unique A-production at line (1), so we must try each of several productions in some order. Failure at line (7) is not ultimate failure; it suggests only that we need to return to line (1) and try another A-production. An input error is reported only if there are no more A-productions to try. In order to try another A-production, we need to be able to reset the input pointer to where it was when we first reached line (1). Thus, a local variable is needed to store this input pointer for future use.

Example: Consider the grammar S → c A d, A → a b | a. To construct a parse tree top-down for the input string w = cad: we have a match for the first and second input symbols, so we advance the input pointer to d and compare d against the next leaf, labeled b. Since b does not match d, we must try the other A-production; in this case we need to reset the input pointer and start parsing again using the new production. A left-recursive grammar can cause a recursive-descent parser, even one with backtracking, to go into an infinite loop.

Predictive Parsers A CFG may be parsed by a recursive-descent parser that needs no backtracking if left recursion is eliminated and left-factoring transformations are applied. A predictive parser can be built using recursive procedures by creating transition diagrams, matching terminals, and making a procedure call for each non-terminal.

Consider the grammar E → E + T | T. After left-recursion elimination and left factoring it becomes E → T E', E' → + T E' | ε. (Transition diagrams for E and E' omitted here.) Transition diagrams can be simplified, provided the sequence of grammar symbols along paths is preserved. The two sets of diagrams are equivalent: if we trace paths from E to an accepting state and substitute for E', then, in both sets of diagrams, the grammar symbols along the paths make up strings of the form T + T + . . . + T.

Transition Diagrams for Predictive Parsers To construct the transition diagram from a grammar, first eliminate left recursion and then left-factor the grammar. Then, for each non-terminal A: 1. Create an initial and final (return) state. 2. For each production A → X1 X2 … Xn, create a path from the initial to the final state, with edges labeled X1, X2, …, Xn. If A → ε, the path is an edge labeled ε. Transition diagrams for predictive parsers differ from those for lexical analyzers: parsers have one diagram for each non-terminal, and the labels of edges can be tokens or non-terminals. A transition on a token (terminal) means that we take that transition if that token is the next input symbol. A transition on a non-terminal A is a call of the procedure for A. We used tail-recursion removal and substitution of procedure bodies to optimize the procedure for a non-terminal.

Non-recursive Predictive Parsing (table-driven predictive parsing) A non-recursive predictive parser can be built by maintaining a stack explicitly, rather than implicitly via recursive calls. The parser mimics a leftmost derivation: if w is the input that has been matched so far, then the stack holds the sequence of grammar symbols yet to be derived. The table-driven parser has an input buffer, a stack containing a sequence of grammar symbols, a parsing table M constructed by Algorithm 4.31, and an output stream. The input buffer contains the string to be parsed, followed by the end marker $. (Schematic omitted: input buffer a + b $, stack X Y Z $, predictive parsing program/driver, and parsing table M.)

Nonrecursive Predictive Parsing Method: Initially, the parser is in a configuration with w$ in the input buffer and the start symbol S of G on top of the stack, above $. The algorithm uses the predictive parsing table M to produce a predictive parse for the input.
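The table-driven loop can be sketched as follows. The expression grammar E → T E', E' → + T E' | ε, T → F T', T' → * F T' | ε, F → ( E ) | id and its parsing table are hardcoded here as an assumed illustration, not copied from the slides' figure:

```python
# Nonrecursive predictive parsing: stack + parsing table, no recursion.
# An empty body [] in the table stands for an epsilon-production.

TABLE = {
    ('E', 'id'): ['T', "E'"],      ('E', '('): ['T', "E'"],
    ("E'", '+'): ['+', 'T', "E'"], ("E'", ')'): [], ("E'", '$'): [],
    ('T', 'id'): ['F', "T'"],      ('T', '('): ['F', "T'"],
    ("T'", '*'): ['*', 'F', "T'"], ("T'", '+'): [], ("T'", ')'): [], ("T'", '$'): [],
    ('F', 'id'): ['id'],           ('F', '('): ['(', 'E', ')'],
}
NONTERMINALS = {'E', "E'", 'T', "T'", 'F'}

def predictive_parse(tokens, start='E'):
    """Return the sequence of productions used, or None on a syntax error."""
    tokens = tokens + ['$']
    stack = ['$', start]              # start symbol on top, above $
    i, output = 0, []
    while stack[-1] != '$':
        top, a = stack[-1], tokens[i]
        if top not in NONTERMINALS:   # terminal on top: must match input
            if top != a:
                return None
            stack.pop(); i += 1
        elif (top, a) in TABLE:       # nonterminal: expand by table entry
            body = TABLE[(top, a)]
            output.append((top, body or ['eps']))
            stack.pop()
            stack.extend(reversed(body))   # push body right-to-left
        else:
            return None               # empty table entry: syntax error
    return output if tokens[i] == '$' else None

for prod in predictive_parse(['id', '+', 'id']):
    print(prod)
```

The first expansion recorded is ('E', ['T', "E'"]), mirroring how the driver replaces the stack top by the chosen production body.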

Predictive Parsing table for the grammar below:

FIRST and FOLLOW FIRST(α) = the set of terminals that begin the strings derived from α. If α ⇒* ε, then ε is also in FIRST(α). FOLLOW(A) = the set of terminals a that can appear immediately to the right of A in some sentential form; in other words, there exists a derivation S ⇒* αAaβ. In addition, if A can be the rightmost symbol in some sentential form, then $ is in FOLLOW(A); recall that $ is a special "endmarker" symbol that is assumed not to be a symbol of any grammar.

Computing FIRST(X): 1. If X is a terminal, FIRST(X) = {X}. 2. If X → ε is a production, then add ε to FIRST(X). 3. If X → Y1 Y2 … Yk is a production, place a in FIRST(X) if a is in FIRST(Yi) and ε is in all of FIRST(Y1), …, FIRST(Yi-1); add ε to FIRST(X) if ε is in all of FIRST(Y1), …, FIRST(Yk).

Computing FOLLOW: 1. Place $ in FOLLOW(S), where S is the start symbol and $ is the endmarker. 2. If there is a production A → αBβ, then everything in FIRST(β) except ε is placed in FOLLOW(B). 3. If there is a production A → αB, or a production A → αBβ where FIRST(β) contains ε, then everything in FOLLOW(A) is in FOLLOW(B).
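The two rule sets above can be implemented as fixed-point iterations. The sketch below is illustrative: the grammar encoding and helper names are assumptions, with 'eps' standing for ε. It is exercised on the expression grammar from the exercise slide:

```python
# Computing FIRST and FOLLOW by iterating the rules to a fixed point.
# A grammar is a dict mapping a nonterminal to a list of bodies.
EPS = 'eps'

def compute_first(grammar):
    first = {A: set() for A in grammar}

    def first_of_seq(symbols):
        """FIRST of a sequence X1 X2 ... Xn (rule 3)."""
        out = set()
        for X in symbols:
            fx = first[X] if X in grammar else {X}  # terminal: FIRST(X) = {X}
            out |= fx - {EPS}
            if EPS not in fx:
                return out
        out.add(EPS)                                # eps in every FIRST(Xi)
        return out

    changed = True
    while changed:
        changed = False
        for A, bodies in grammar.items():
            for body in bodies:
                f = {EPS} if body == [EPS] else first_of_seq(body)
                if not f <= first[A]:
                    first[A] |= f
                    changed = True
    return first, first_of_seq

def compute_follow(grammar, start):
    first, first_of_seq = compute_first(grammar)
    follow = {A: set() for A in grammar}
    follow[start].add('$')                          # rule 1
    changed = True
    while changed:
        changed = False
        for A, bodies in grammar.items():
            for body in bodies:
                for i, B in enumerate(body):
                    if B not in grammar:
                        continue
                    beta = body[i + 1:]
                    f_beta = first_of_seq(beta) if beta else {EPS}
                    # rule 2: FIRST(beta) - {eps}; rule 3: FOLLOW(A) if eps in it
                    add = (f_beta - {EPS}) | (follow[A] if EPS in f_beta else set())
                    if not add <= follow[B]:
                        follow[B] |= add
                        changed = True
    return follow

G = {
    'E':  [['T', "E'"]],
    "E'": [['+', 'T', "E'"], ['-', 'T', "E'"], [EPS]],
    'T':  [['F', "T'"]],
    "T'": [['*', 'F', "T'"], ['/', 'F', "T'"], [EPS]],
    'F':  [['(', 'E', ')'], ['id'], ['num']],
}
print(compute_first(G)[0]['E'])     # the set {'(', 'id', 'num'}
print(compute_follow(G, 'E')['F'])  # the set {'*', '/', '+', '-', ')', '$'}
```

The computed sets match the hand-derived answers in the exercise that follows.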

Example: computing FIRST stmt → expr ; expr → term expr1 | ε expr1 → + term expr1 | ε term → factor term1 term1 → * factor term1 | ε factor → ( expr ) | number Easy ones: FIRST(expr1) = {+, ε} FIRST(term1) = {*, ε} FIRST(factor) = {(, number} Next step: FIRST(term) = FIRST(factor) FIRST(expr) = FIRST(term) ∪ {ε} FIRST(stmt) = FIRST(expr ;). Because expr has an ε-production, add ";" to FIRST(stmt).

Exercise: computing FIRST and FOLLOW Given the following grammar 1 E → T E' 2 E' → + T E' | - T E' | ε 3 T → F T' 4 T' → * F T' | / F T' | ε 5 F → ( E ) | id | num Compute FIRST and FOLLOW for all non-terminals. FIRST(E) = {(, id, num} FIRST(E') = {+, -, ε} FIRST(T) = {(, id, num} FIRST(T') = {*, /, ε} FIRST(F) = {(, id, num} FOLLOW(E) = {), $} FOLLOW(E') = {), $} FOLLOW(T) = {+, -, ), $} FOLLOW(T') = {+, -, ), $} FOLLOW(F) = {*, /, +, -, ), $}

Construction of a predictive parsing table
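The standard construction rule is: for each production A → α, add A → α to M[A, a] for every terminal a in FIRST(α), and, if ε ∈ FIRST(α), also add it to M[A, b] for every b in FOLLOW(A). A sketch of that rule, with FIRST and FOLLOW supplied by hand for the dangling-else grammar S → iEtSS' | a, S' → eS | ε, E → b used in the exercises (the encoding is an illustrative assumption):

```python
# Building the predictive parsing table M and exposing multiply defined
# entries. FIRST-of-body and FOLLOW are hand-computed for this grammar.
EPS = 'eps'
PRODUCTIONS = [
    ('S', ['i', 'E', 't', 'S', "S'"]),
    ('S', ['a']),
    ("S'", ['e', 'S']),
    ("S'", [EPS]),
    ('E', ['b']),
]
FIRST_OF_BODY = {0: {'i'}, 1: {'a'}, 2: {'e'}, 3: {EPS}, 4: {'b'}}
FOLLOW = {'S': {'e', '$'}, "S'": {'e', '$'}, 'E': {'t'}}

def build_table():
    M = {}
    for idx, (A, body) in enumerate(PRODUCTIONS):
        terminals = FIRST_OF_BODY[idx] - {EPS}
        if EPS in FIRST_OF_BODY[idx]:
            terminals |= FOLLOW[A]        # eps case: use FOLLOW(A)
        for a in terminals:
            M.setdefault((A, a), []).append((A, body))
    return M

M = build_table()
# This grammar is not LL(1): M[S', e] receives two productions.
print(M[("S'", 'e')])
```

Running it shows the multiply defined entry M[S', e] discussed on the following slides.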

Notes: For some grammars, however, M may have some entries that are multiply defined. For example, if G is left-recursive or ambiguous, then M will have at least one multiply defined entry. Although left-recursion elimination and left factoring are easy to do, there are some grammars for which no amount of alteration will produce an LL(1) grammar. The language in the following example (which abstracts the dangling-else problem) has no LL(1) grammar at all.

Cont. The parsing table for this grammar appears below. The entry M[S', e] contains both S' → eS and S' → ε. The grammar is ambiguous, and the ambiguity is manifested by a choice of which production to use when an e (else) is seen. We can resolve this ambiguity by choosing S' → eS, which associates each else with the closest previous unmatched then.

LL(1) Grammar: scanning the input from left to right (L), producing a leftmost derivation (L), using one input symbol of lookahead at each step to make parsing-action decisions (1). A grammar whose parsing table has no multiply defined entries is called LL(1). The class of LL(1) grammars is rich enough to cover most programming constructs, although care is needed in writing a suitable grammar for the source language. Properties of LL(1) parsers: A correct, leftmost parse is guaranteed (after left-recursion elimination and left factoring). All grammars in the LL(1) class are unambiguous. All LL(1) parsers operate in linear time and, at most, linear space.

LL(1) grammar properties (cont.)

Exercise 1 A → α | β. Which two of the following cases may cause the grammar to be NOT LL(1)? 1. a in FIRST(α) and b in FIRST(β) 2. a and b both in FIRST(α) 3. a in both FIRST(α) and FIRST(β) 4. a in FIRST(α) and FOLLOW(A), ε in FIRST(β) Answer: 3 and 4.

Exercise 2 stmt → if expr then stmt else stmt | if expr then stmt Abstracted: S → iEtS | iEtSeS | a E → b After left factoring, the new CFG is S → iEtSS' | a S' → eS | ε Why is this CFG not LL(1)?

S  iEtSS’ | a S’ eS | e E  b Where is the conflict? Exercise 2 FIRST(S) = FIRST(S’) = FOLLOW(S) = FOLLOW(S’) = Where is the conflict? {i, a} {e, e} {e,$} {e,$} S’eS and S’e are both entered for M[S’,e] entry

Exercise 3 Show that the following CFG is not LL(1): stmt → label unlabeled_stmt label → id : | ε unlabeled_stmt → id := expr id is in both FIRST(label) and FOLLOW(label). This means both label → id : and label → ε will be inserted into parser table entry M[label, id].

Bottom-Up Parsing A bottom-up parse corresponds to the construction of a parse tree for an input string beginning at the leaves (the bottom) and working up towards the root (the top). It is convenient to describe parsing as the process of building parse trees. Bottom-up parsing during a left-to-right scan of the input constructs a rightmost derivation in reverse.

We can think of bottom-up parsing as the process of "reducing" a string w to the start symbol of the grammar. At each reduction step, a specific substring matching the body of a production is replaced by the nonterminal at the head of that production. A reduction is the reverse of a step in a derivation (recall that in a derivation, a nonterminal in a sentential form is replaced by the body of one of its productions). The goal of bottom-up parsing is therefore to construct a derivation in reverse.

Bottom-up parsing algorithms are shift-reduce parsers for LR grammars. An LR parser needs too much work to be built by hand, but tools called automatic parser generators make it easy to construct efficient LR parsers from suitable grammars. The following derivation corresponds to a bottom-up parse of id * id; this derivation is in fact a rightmost derivation.

Handles Informally, a "handle" is a substring that matches the body of a production, and whose reduction represents one step along the reverse of a rightmost derivation.

Shift-Reduce Parsing Shift-reduce parsing is a form of bottom-up parsing in which a stack holds grammar symbols and an input buffer holds the rest of the string to be parsed. As we shall see, the handle always appears at the top of the stack just before it is identified as the handle. $ is used to mark the bottom of the stack and also the right end of the input.

The use of a stack in shift-reduce parsing is justified by an important fact: the handle will always eventually appear on top of the stack, never inside. While the primary operations are shift and reduce, there are actually four possible actions a shift-reduce parser can make: 1. Shift. Shift the next input symbol onto the top of the stack. 2. Reduce. The right end of the string to be reduced must be at the top of the stack. Locate the left end of the string within the stack and decide with what nonterminal to replace the string. 3. Accept. Announce successful completion of parsing. 4. Error. Discover a syntax error and call an error recovery routine.
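All four actions can be seen in a minimal driver sketched below. The tiny grammar (1) E → E + id, (2) E → id and its hand-built SLR ACTION/GOTO table are illustrative assumptions, constructed from that grammar's LR(0) automaton rather than taken from the slides:

```python
# Shift-reduce parsing driven by an ACTION/GOTO table.
# ('s', state) = shift, ('r', prod) = reduce, 'acc' = accept;
# a missing ACTION entry is a syntax error.

PRODS = {1: ('E', 3), 2: ('E', 1)}   # production -> (head, body length)
ACTION = {
    (0, 'id'): ('s', 2),
    (1, '+'): ('s', 3), (1, '$'): 'acc',
    (2, '+'): ('r', 2), (2, '$'): ('r', 2),
    (3, 'id'): ('s', 4),
    (4, '+'): ('r', 1), (4, '$'): ('r', 1),
}
GOTO = {(0, 'E'): 1}

def sr_parse(tokens):
    """Return the list of reductions performed, or None on error."""
    tokens = tokens + ['$']
    stack, i, moves = [0], 0, []
    while True:
        act = ACTION.get((stack[-1], tokens[i]))
        if act is None:                   # 4. Error
            return None
        if act == 'acc':                  # 3. Accept
            return moves
        kind, arg = act
        if kind == 's':                   # 1. Shift: push the new state
            stack.append(arg)
            i += 1
        else:                             # 2. Reduce: pop |body| states,
            head, n = PRODS[arg]          #    then take GOTO on the head
            del stack[len(stack) - n:]
            stack.append(GOTO[(stack[-1], head)])
            moves.append(arg)

print(sr_parse(['id', '+', 'id']))  # [2, 1]: reduce E->id, then E->E+id
print(sr_parse(['+', 'id']))        # None: syntax error
```

Because the table encodes the LR(0) automaton, the handle is always on top of the stack when a reduce action fires, as the slide states.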

Conflicts During Shift-Reduce Parsing For some grammars, every shift-reduce parser can reach a configuration in which the parser, knowing the entire stack contents and the next input symbol, cannot decide whether to shift or to reduce (a shift/reduce conflict), or cannot decide which of several reductions to make (a reduce/reduce conflict).

LR Parsing: Simple LR The most prevalent type of bottom-up parser today is based on a concept called LR(k) parsing; the "L" is for left-to-right scanning of the input, the "R" for constructing a rightmost derivation in reverse, and the k for the number of input symbols of lookahead that are used in making parsing decisions. The cases k = 0 or k = 1 are of practical interest, and we shall only consider LR parsers with k <=1 here. When (k) is omitted, k is assumed to be 1.

LR parsing is attractive for a variety of reasons: LR parsers can be constructed to recognize virtually all programming-language constructs for which context-free grammars can be written. Non-LR context-free grammars exist, but these can generally be avoided for typical programming-language constructs. The LR-parsing method is the most general nonbacktracking shift-reduce parsing method known. An LR parser can detect a syntactic error as soon as it is possible to do so on a left-to-right scan of the input. The class of grammars that can be parsed using LR methods is a proper superset of the class of grammars that can be parsed with predictive or LL methods. For a grammar to be LR(k), we must be able to recognize the occurrence of the right side of a production in a right-sentential form, with k input symbols of lookahead. This requirement is far less stringent than that for LL(k) grammars, where we must be able to recognize the use of a production seeing only the first k symbols of what its right side derives. Thus, LR grammars can describe more languages than LL grammars.

The principal drawback of the LR method is that it is too much work to construct an LR parser by hand for a typical programming-language grammar. A specialized tool, an LR parser generator, is needed. Fortunately, many such generators are available; one of the most commonly used is Yacc. Such a generator takes a context-free grammar and automatically produces a parser for that grammar. If the grammar contains ambiguities or other constructs that are difficult to parse in a left-to-right scan of the input, then the parser generator locates these constructs and provides detailed diagnostic messages.

Items and the LR(0) Automaton An LR parser makes shift-reduce decisions by maintaining states to keep track of where we are in a parse. An LR(0) item of a grammar G is a production of G with a dot at some position of the body. For example, a production A → XYZ yields the four items A → ·XYZ, A → X·YZ, A → XY·Z, and A → XYZ·. One collection of sets of LR(0) items, called the canonical LR(0) collection, provides the basis for constructing a deterministic finite automaton that is used to make parsing decisions, called an LR(0) automaton.

Augmented Grammar If G is a grammar with start symbol S, then G', the augmented grammar for G, is G with a new start symbol S' and production S' → S. The purpose of this new starting production is to indicate to the parser when it should stop parsing and announce acceptance of the input. That is, acceptance occurs when and only when the parser is about to reduce by S' → S.

Closure of Item Sets

The Function GOTO
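Under the usual definitions, CLOSURE and GOTO can be sketched as follows. The small augmented grammar E' → E, E → E + T | T, T → id is an assumed illustration, not the one in the slides' figure; an item is encoded as (head, body, dot position):

```python
# CLOSURE and GOTO on LR(0) item sets.
GRAMMAR = {
    "E'": [('E',)],
    'E':  [('E', '+', 'T'), ('T',)],
    'T':  [('id',)],
}

def closure(items):
    """Add A -> .gamma for every nonterminal A appearing right after a dot."""
    items = set(items)
    changed = True
    while changed:
        changed = False
        for head, body, dot in list(items):
            if dot < len(body) and body[dot] in GRAMMAR:  # dot before nonterminal
                for prod in GRAMMAR[body[dot]]:
                    item = (body[dot], prod, 0)
                    if item not in items:
                        items.add(item)
                        changed = True
    return frozenset(items)

def goto(items, X):
    """Move the dot over X in every item where X follows the dot."""
    moved = {(h, b, d + 1) for h, b, d in items if d < len(b) and b[d] == X}
    return closure(moved)

I0 = closure({("E'", ('E',), 0)})     # the initial set of the LR(0) automaton
for head, body, dot in sorted(I0):
    print(head, '->', ' '.join(body[:dot]) + '.' + ' '.join(body[dot:]))
```

Repeatedly applying goto to I0 over all grammar symbols yields the canonical LR(0) collection described on the next slide.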

Example For the grammar, CLOSURE(I) contains the set of items I0 in Fig. 4.31, which represents the canonical collection of sets of LR(0) items.

The sets of items of interest can be divided into two classes: 1. Kernel items: the initial item S' → ·S, and all items whose dots are not at the left end. 2. Non-kernel items: all items with their dots at the left end, except for S' → ·S.

The LR-Parsing Algorithm A schematic of an LR parser consists of an input, an output, a stack, a driver program, and a parsing table that has two parts (ACTION and GOTO).

Structure of the LR Parsing Table

SLR-parsing algorithm (Simple LR)

Example: parse id * id + id, using the following grammar


See page 254 of the book

A canonical LR(1) parser solves such conflicts, since it provides one symbol of lookahead.

Error Recovery Selection of a synchronizing set: Place all symbols in FOLLOW(A) into the synchronizing set for non-terminal A. Add keywords that begin statements to the set. Add symbols in FIRST(A); we may then re-parse A rather than pop A. If a non-terminal can generate ε, then the ε-production can be used as a default. Pop and continue parsing.