Scanner wrap-up and Introduction to Parser

Automating Scanner Construction

RE → NFA (Thompson's construction)
- Build an NFA for each term
- Combine them with ε-moves

NFA → DFA (subset construction)
- Build the simulation

DFA → minimal DFA (today)
- Hopcroft's algorithm

DFA → RE (not really part of scanner construction)
- All-pairs, all-paths problem
- Union together paths from s₀ to a final state

DFA Minimization

The big picture:
- Discover sets of equivalent states
- Represent each such set with just one state

DFA Minimization

The big picture:
- Discover sets of equivalent states
- Represent each such set with just one state

Two states are equivalent if and only if:
- The sets of paths leading to them are equivalent
- ∀ α ∈ Σ, transitions on α lead to equivalent states (DFA)
- α-transitions to distinct sets ⇒ the states must be in distinct sets

A partition P of S:
- Each s ∈ S is in exactly one set pᵢ ∈ P
- The algorithm iteratively partitions the DFA's states

DFA Minimization

Details of the algorithm:
- Group states into maximal-size sets, optimistically
- Iteratively subdivide those sets, as needed
- States that remain grouped together are equivalent

The initial partition, P₀, has two sets: {F} and {Q−F}, where D = (Q, Σ, δ, q₀, F).

Splitting a set ("partitioning a set by a"):
- Assume q_a, q_b ∈ s, with δ(q_a, a) = q_x and δ(q_b, a) = q_y
- If q_x and q_y are not in the same set, then s must be split
- If q_a has a transition on a and q_b does not, a splits s
- One state in the final DFA cannot have two transitions on a

DFA Minimization

The algorithm:

    P ← {F, Q−F}
    while (P is still changing)
        T ← {}
        for each set S ∈ P
            for each α ∈ Σ
                partition S by α into S₁ and S₂
                T ← T ∪ S₁ ∪ S₂
        if T ≠ P then P ← T

Why does this work?
- The partition P ⊆ 2^Q
- It starts off with the two subsets of Q: {F} and {Q−F}
- The while loop takes Pᵢ → Pᵢ₊₁ by splitting one or more sets
- Pᵢ₊₁ is at least one step closer to the partition with |Q| sets
- There are at most |Q| splits

Note that:
- Partitions are never combined
- The initial partition ensures that the final states remain separate

This is a fixed-point algorithm!
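
To make the fixed point concrete, here is a minimal sketch of the naive refinement in Python. The DFA representation (delta as a dict mapping (state, symbol) pairs to states) and the name minimize are illustrative assumptions, not part of the slides.

```python
def minimize(states, alphabet, delta, finals):
    """Split partition blocks until no block can be split (a fixed point)."""
    partition = [blk for blk in (set(finals), set(states) - set(finals)) if blk]
    changed = True
    while changed:
        changed = False
        block_of = {q: i for i, blk in enumerate(partition) for q in blk}
        refined = []
        for blk in partition:
            # States stay together only if every symbol sends them to the
            # same block of the current partition.
            groups = {}
            for q in blk:
                sig = tuple(block_of[delta[q, a]] for a in alphabet)
                groups.setdefault(sig, set()).add(q)
            if len(groups) > 1:
                changed = True    # a block split: not yet at the fixed point
            refined.extend(groups.values())
        partition = refined
    return partition
```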

Key Idea: Splitting S around α

[Figure: the algorithm partitions the original set S ⊆ Q around α; S has transitions on α to R, Q, and T.]

Key Idea: Splitting S around α

[Figure: S is split into S₁ and S₂, where S₂ is everything in S − S₁.]

Could we split S₂ further? Yes, but it does not help asymptotically.

DFA Minimization

Refining the algorithm:
- As written, it examines every S ∈ P on each iteration; this does a lot of unnecessary work
- We only need to examine S if some T, reachable from S, has split

Reformulate the algorithm using a "worklist":
- Start the worklist with the initial partition, {F} and {Q−F}
- When a split produces S₁ and S₂, place S₂ on the worklist
- This version looks at each S ∈ P many fewer times

This is the well-known, widely used algorithm due to John Hopcroft.

Hopcroft's Algorithm

    W ← {F, Q−F};  P ← {F, Q−F};     // W is the worklist, P the current partition
    while (W is not empty) do begin
        select and remove S from W;   // S is a set of states
        for each α in Σ do begin
            let Iα ← δα⁻¹(S);         // Iα is the set of all states that can reach S on α
            for each R in P such that R ∩ Iα is not empty
                    and R is not contained in Iα do begin
                partition R into R₁ and R₂ such that R₁ ← R ∩ Iα and R₂ ← R − R₁;
                replace R in P with R₁ and R₂;
                if R ∈ W then
                    replace R with R₁ in W and add R₂ to W;
                else if |R₁| ≤ |R₂| then
                    add R₁ to W;
                else
                    add R₂ to W;
            end
        end
    end
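
For comparison, here is a minimal Python sketch of the worklist formulation, under the same dict-based DFA representation as the sketch above (the function and helper names are illustrative):

```python
from collections import defaultdict

def hopcroft(states, alphabet, delta, finals):
    """Worklist formulation: refine only around sets that have recently split."""
    finals = set(finals)
    partition = [blk for blk in (finals, set(states) - finals) if blk]
    worklist = [set(blk) for blk in partition]

    # Inverse transition function: preimage[a][q] = {p : delta[p, a] == q}.
    preimage = {a: defaultdict(set) for a in alphabet}
    for p in states:
        for a in alphabet:
            preimage[a][delta[p, a]].add(p)

    while worklist:
        S = worklist.pop()
        for a in alphabet:
            I = set().union(*[preimage[a][q] for q in S])  # states reaching S on a
            refined = []
            for R in partition:
                R1, R2 = R & I, R - I
                if R1 and R2:                   # R straddles I: split it
                    refined += [R1, R2]
                    if R in worklist:
                        worklist.remove(R)
                        worklist += [R1, R2]
                    else:
                        # Queue only the smaller half; this is the key to
                        # Hopcroft's O(n log n) bound.
                        worklist.append(min(R1, R2, key=len))
                else:
                    refined.append(R)
            partition = refined
    return partition
```

Both sketches compute the same partition; the worklist version simply avoids re-examining blocks that cannot have changed.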

A Detailed Example

Remember (a|b)*abb? Applying the subset construction: iteration 3 adds nothing to S, so the algorithm halts.

[Figure: our first NFA, with states q₀ through q₄; q₀ loops on a|b, a path on a, b, b leads to q₄, and any subset containing q₄ (the final state) is a final state of the DFA.]
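
Here is a minimal Python sketch of the subset construction itself. The NFA encoding, a dict from (state, symbol) to sets of states with the key 'eps' for ε-moves, is an assumption for illustration:

```python
def epsilon_closure(nfa, states):
    """All NFA states reachable from `states` using only epsilon-moves."""
    stack, closure = list(states), set(states)
    while stack:
        q = stack.pop()
        for r in nfa.get((q, 'eps'), set()) - closure:
            closure.add(r)
            stack.append(r)
    return frozenset(closure)

def subset_construction(nfa, start, alphabet):
    """Build DFA states (frozensets of NFA states) and transitions."""
    d0 = epsilon_closure(nfa, {start})
    delta, seen, worklist = {}, {d0}, [d0]
    while worklist:                  # halts when an iteration adds nothing new
        S = worklist.pop()
        for a in alphabet:
            moved = set().union(*[nfa.get((q, a), set()) for q in S])
            T = epsilon_closure(nfa, moved)
            delta[S, a] = T
            if T not in seen:
                seen.add(T)
                worklist.append(T)
    return d0, seen, delta
```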

A Detailed Example

The DFA for (a|b)*abb:
- Not much bigger than the original
- All transitions are deterministic
- Use the same code skeleton as before

[Figure: the five-state DFA, s₀ through s₄, with transitions on a and b.]

A Detailed Example

Applying the minimization algorithm to the DFA merges s₀ and s₂ into a single state.

[Figure: the five-state DFA and the minimized four-state DFA, in which s₀ and s₂ are combined and s₄ remains the final state.]
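
Feeding the standard subset-construction DFA for (a|b)*abb to the minimize sketch above reproduces this merge. The transition table below is reconstructed from the textbook example, so treat the exact labeling as an assumption:

```python
states = {'s0', 's1', 's2', 's3', 's4'}
alphabet = ('a', 'b')
delta = {
    ('s0', 'a'): 's1', ('s0', 'b'): 's2',
    ('s1', 'a'): 's1', ('s1', 'b'): 's3',
    ('s2', 'a'): 's1', ('s2', 'b'): 's2',
    ('s3', 'a'): 's1', ('s3', 'b'): 's4',
    ('s4', 'a'): 's1', ('s4', 'b'): 's2',
}
print(minimize(states, alphabet, delta, {'s4'}))
# Blocks: {s0, s2}, {s1}, {s3}, {s4} -- s0 and s2 are equivalent.
```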

DFA Minimization

What about a(b|c)*? First, the subset construction:

[Figure: the Thompson NFA for a(b|c)*, states q₀ through q₉ connected by ε-moves, and the resulting four-state DFA s₀ through s₃, where s₁, s₂, and s₃ are final states.]

DFA Minimization

Then, apply the minimization algorithm to produce the minimal DFA:

[Figure: the four-state DFA collapses to two states: an a-transition from s₀ to the final state s₁, which loops on b and c.]

Minimizing that DFA produces the one that a human would design!

Limits of Regular Languages

Advantages of regular expressions:
- Simple & powerful notation for specifying patterns
- Automatic construction of fast recognizers
- Many kinds of syntax can be specified with REs

Example — an expression grammar:

    Term → [a-zA-Z] ([a-zA-Z] | [0-9])*
    Op → + | - | × | /
    Expr → (Term Op)* Term

Of course, this would generate a DFA…

If REs are so useful, why not use them for everything?

Limits of Regular Languages

Not all languages are regular: RLs ⊂ CFLs ⊂ CSLs.

You cannot construct DFAs to recognize these languages:
- L = { pᵏqᵏ } (parenthesis languages)
- L = { wcwʳ | w ∈ Σ* }

Neither of these is a regular language (nor an RE). But this is a little subtle; you can construct DFAs for:
- Strings with alternating 0's and 1's: (ε | 1)(01)*(ε | 0)
- Strings with an even number of 0's and 1's

REs can count bounded sets and bounded differences.
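
As a quick check of that last point, here is a hedged sketch of the four-state DFA for strings with an even number of 0's and an even number of 1's (the state names are illustrative):

```python
# Each state records (parity of 0s, parity of 1s); 'ee' (even, even) accepts.
delta = {
    ('ee', '0'): 'oe', ('ee', '1'): 'eo',
    ('oe', '0'): 'ee', ('oe', '1'): 'oo',
    ('eo', '0'): 'oo', ('eo', '1'): 'ee',
    ('oo', '0'): 'eo', ('oo', '1'): 'oe',
}

def accepts(word, state='ee'):
    for ch in word:
        state = delta[state, ch]
    return state == 'ee'

assert accepts('0110') and not accepts('010')
```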

What can be so hard?

Poor language design can complicate scanning:
- Reserved words are important: if then then then = else; else else = then   (PL/I)
- Insignificant blanks (Fortran & Algol 68): do 10 i = 1,25 versus do 10 i = 1.25
- String constants with special characters (C, C++, Java, …): newline, tab, quote, comment delimiters, …
- Finite closures (Fortran 66 & Basic): limited identifier length adds states to count length

What can be so hard? (Fortran 66/77)

How does a compiler scan this?
- The first pass finds & inserts blanks; it can add extra words or tags to create a scannable language
- The second pass is a normal scanner

(Example due to Dr. F.K. Zadeck.)

Building Faster Scanners from the DFA

Table-driven recognizers waste effort:
- Read (& classify) the next character
- Find the next state
- Assign to the state variable
- Trip through the case logic in action()
- Branch back to the top

We can do better:
- Encode states & actions in the code
- Do transition tests locally
- Generate ugly, spaghetti-like code
- Takes (many) fewer operations per input character

The table-driven skeleton:

    char ← next character;
    state ← s₀;
    call action(state, char);
    while (char ≠ eof)
        state ← δ(state, char);
        call action(state, char);
        char ← next character;
    if state is a final state then report acceptance;
    else report failure;
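
A minimal Python rendering of the table-driven skeleton, using the register pattern r Digit Digit* from the next slide (the classify helper and the table layout are illustrative assumptions):

```python
# States: 0 = start, 1 = saw 'r', 2 = saw 'r' plus digits; missing entries = error.
def classify(ch):
    if ch == 'r':
        return 'r'
    if '0' <= ch <= '9':
        return 'digit'
    return 'other'

delta = {(0, 'r'): 1, (1, 'digit'): 2, (2, 'digit'): 2}
finals = {2}

def recognize(word):
    state = 0
    for ch in word:                  # read, classify, look up, branch back
        state = delta.get((state, classify(ch)), -1)
        if state == -1:
            return False
    return state in finals

assert recognize('r29') and not recognize('r') and not recognize('x5')
```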

Building Faster Scanners from the DFA

A direct-coded recognizer for r Digit Digit*:
- Many fewer operations per character
- Almost no memory operations
- Even faster with careful use of fall-through cases

    goto s₀;
    s₀: word ← Ø;
        char ← next character;
        if (char = 'r') then goto s₁; else goto s_e;
    s₁: word ← word + char;
        char ← next character;
        if ('0' ≤ char ≤ '9') then goto s₂; else goto s_e;
    s₂: word ← word + char;
        char ← next character;
        if ('0' ≤ char ≤ '9') then goto s₂;
        else if (char = eof) then report success;
        else goto s_e;
    s_e: print error message;
         return failure;
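
Python has no goto, so a faithful direct-coded version must express the states as explicit control flow. This is a hedged sketch with the same state structure as the code above:

```python
# Direct-coded recognizer for r Digit Digit*: states are inline control flow,
# with no table lookups per character (contrast the table-driven version).
def recognize_register(chars):
    it = iter(chars)
    ch = next(it, None)                          # state s0
    if ch != 'r':
        return None                              # state se: error
    word = ch
    ch = next(it, None)                          # state s1
    if ch is None or not '0' <= ch <= '9':
        return None                              # se: need at least one digit
    while ch is not None and '0' <= ch <= '9':   # state s2, looping on digits
        word += ch
        ch = next(it, None)
    return word if ch is None else None          # eof in s2 accepts; junk fails

assert recognize_register('r17') == 'r17'
assert recognize_register('r') is None
```

The digit loop plays the role of s₂'s fall-through case.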

Building Faster Scanners

Hashing keywords versus encoding them directly:
- Some (well-known) compilers recognize keywords as identifiers and check them in a hash table
- Encoding keywords in the DFA is a better idea: O(1) cost per transition, and it avoids a hash lookup on each identifier

It is hard to beat a well-implemented DFA scanner.

Building Scanners

The point: all this technology lets us automate scanner construction.
- The implementer writes down the regular expressions
- The scanner generator builds the NFA, DFA, and minimal DFA, then writes out the (table-driven or direct-coded) code
- This reliably produces fast, robust scanners

For most modern language features, this works. You should think twice before introducing a feature that defeats a DFA-based scanner; the ones we've seen (e.g., insignificant blanks, non-reserved keywords) have not proven particularly useful or long-lasting.

Some Points of Disagreement with EaC

"Table-driven scanners are not fast."
- EaC doesn't say they are slow; it says you can do better
- Faster code can be generated by embedding the scanner in the code; this was shown for both LR-style parsers and for scanners in the 1980s

"Hashed lookup of keywords is slow."
- EaC doesn't say it is slow; it says that the effort can be folded into the scanner so that it has no extra cost
- Compilers like GCC use hash lookup: a word must fail in the lookup to be classified as an identifier, and with collisions in the table, this can add up
- At any rate, the cost is unneeded, since the DFA can do it for O(1) cost per character

Parsing

The Front End

The parser:
- Checks the stream of words and their parts of speech (produced by the scanner) for grammatical correctness
- Determines if the input is syntactically well formed
- Guides checking at deeper levels than syntax
- Builds an IR representation of the code

Think of this as the mathematics of diagramming sentences.

[Figure: source code → Scanner → tokens → Parser → IR, with both phases reporting errors.]

The Study of Parsing

Parsing is the process of discovering a derivation for some sentence. We need:
- A mathematical model of syntax — a grammar G
- An algorithm for testing membership in L(G)
- To keep in mind that our goal is building parsers, not studying the mathematics of arbitrary languages

Roadmap:
1. Context-free grammars and derivations
2. Top-down parsing: hand-coded recursive descent parsers
3. Bottom-up parsing: generated LR(1) parsers

Specifying Syntax with a Grammar

Context-free syntax is specified with a context-free grammar:

    SheepNoise → SheepNoise baa | baa

This CFG defines the set of noises sheep normally make. It is written in a variant of Backus–Naur form.

Formally, a grammar is a four-tuple, G = (S, N, T, P):
- S is the start symbol; the strings derivable from S form L(G)
- N is a set of non-terminal symbols (syntactic variables)
- T is a set of terminal symbols (words)
- P is a set of productions or rewrite rules (P: N → (N ∪ T)⁺)

Deriving Syntax

We can use the SheepNoise grammar to create sentences by using the productions as rewriting rules:

    SheepNoise → baa
    SheepNoise → SheepNoise baa → baa baa
    SheepNoise → SheepNoise baa → SheepNoise baa baa → baa baa baa

And so on…
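
A minimal sketch of this rewriting process in code (the grammar encoding and the derive function are illustrative assumptions, not from the slides):

```python
import random

# SheepNoise -> SheepNoise baa | baa, as alternatives of symbol lists.
grammar = {'SheepNoise': [['SheepNoise', 'baa'], ['baa']]}

def derive(symbol, depth=3):
    """Rewrite `symbol` with the productions until only terminals remain."""
    if symbol not in grammar:                    # terminal: nothing to rewrite
        return [symbol]
    # Force the non-recursive alternative at depth 0 so rewriting terminates.
    rhs = random.choice(grammar[symbol]) if depth > 0 else grammar[symbol][-1]
    return [t for s in rhs for t in derive(s, depth - 1)]

print(' '.join(derive('SheepNoise')))            # e.g. "baa baa baa"
```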

A More Useful Grammar

To explore the uses of CFGs, we need a more complex grammar. Such a sequence of rewrites is called a derivation, and the process of discovering a derivation is called parsing. We denote this derivation: Expr →* id - num * id.
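
The grammar on this slide did not survive the transcript. The classic expression grammar from Cooper & Torczon that yields the derivation Expr →* id - num * id is, reconstructed here as an assumption:

    Expr → Expr Op Expr | ( Expr ) | num | id
    Op → + | - | × | ÷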

Derivations

At each step, we choose a non-terminal to replace; different choices can lead to different derivations. Two derivations are of interest:
- Leftmost derivation — replace the leftmost NT at each step
- Rightmost derivation — replace the rightmost NT at each step

These are the two systematic derivations. (We don't care about randomly-ordered derivations!)

The example on the preceding slide was a leftmost derivation. Of course, there is also a rightmost derivation; interestingly, it turns out to be different.

The Two Derivations for x - 2 * y

In both cases, Expr →* id - num * id, but the two derivations produce different parse trees, and the parse trees imply different evaluation orders!

[Table: the leftmost derivation and the rightmost derivation, side by side.]
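
The derivation tables did not survive the transcript either. Under the reconstructed grammar above, the two derivations would run as follows (again a reconstruction, to be treated as an assumption):

    Leftmost:  Expr → Expr Op Expr → id Op Expr → id - Expr
               → id - Expr Op Expr → id - num Op Expr
               → id - num * Expr → id - num * id

    Rightmost: Expr → Expr Op Expr → Expr Op id → Expr * id
               → Expr Op Expr * id → Expr Op num * id
               → Expr - num * id → id - num * id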

Derivations and Parse Trees

Leftmost derivation:

[Figure: the parse tree, with the subtraction at the root and 2 * y grouped beneath its right operand.]

This evaluates as x - (2 * y).

Derivations and Parse Trees

Rightmost derivation:

[Figure: the parse tree, with the multiplication at the root and x - 2 grouped beneath its left operand.]

This evaluates as (x - 2) * y.

Derivations and Precedence

These two derivations point out a problem with the grammar: it has no notion of precedence, or implied order of evaluation.

To add precedence:
- Create a non-terminal for each level of precedence
- Isolate the corresponding part of the grammar
- Force the parser to recognize high-precedence subexpressions first

For algebraic expressions:
- Multiplication and division first (level one)
- Subtraction and addition next (level two)

Derivations and Precedence

Adding the standard algebraic precedence produces a grammar with one non-terminal per precedence level (labeled "level one" and "level two" in the figure). This grammar:
- Is slightly larger
- Takes more rewriting to reach some of the terminal symbols
- Encodes the expected precedence
- Produces the same parse tree under leftmost & rightmost derivations

Let's see how it parses x - 2 * y.
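
The precedence grammar itself did not survive the transcript. The standard two-level grammar from Cooper & Torczon, reconstructed here as an assumption, is:

    Expr → Expr + Term | Expr - Term | Term          (level two)
    Term → Term × Factor | Term ÷ Factor | Factor    (level one)
    Factor → ( Expr ) | num | id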

Derivations and Precedence

The rightmost derivation produces x - (2 * y), along with an appropriate parse tree. Both the leftmost and rightmost derivations give the same expression, because the grammar directly encodes the desired precedence.

[Figure: its parse tree, built from Expr, Term, and Factor nodes, with 2 * y grouped beneath the right operand of the subtraction.]

Ambiguous Grammars

Our original expression grammar had other problems:
- It allows multiple leftmost derivations for x - 2 * y
- It is hard to automate derivation if there is more than one choice
- The grammar is ambiguous

[Figure: a second leftmost derivation that makes a different choice than the first time.]

Two Leftmost Derivations for x - 2 * y

The difference:
- Different productions are chosen on the second step
- Both derivations succeed in producing x - 2 * y

[Table: the original choice and the new choice, side by side.]

Ambiguous Grammars

Definitions:
- If a grammar has more than one leftmost derivation for a single sentential form, the grammar is ambiguous
- If a grammar has more than one rightmost derivation for a single sentential form, the grammar is ambiguous
- The leftmost and rightmost derivations for a sentential form may differ, even in an unambiguous grammar

Classic example — the if-then-else problem:

    Stmt → if Expr then Stmt
         | if Expr then Stmt else Stmt
         | … other stmts …

This ambiguity is entirely grammatical in nature.

Ambiguity

This sentential form has two derivations:

    if Expr₁ then if Expr₂ then Stmt₁ else Stmt₂

[Figure: two parse trees. Using production 2 and then production 1 attaches the else (and Stmt₂) to the outer if; using production 1 and then production 2 attaches the else to the inner if.]

Ambiguity

Removing the ambiguity:
- We must rewrite the grammar to avoid generating the problem
- Match each else to the innermost unmatched if (the common-sense rule)

With this grammar (reconstructed below), the example has only one derivation.
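
The rewritten grammar did not survive the transcript. The standard matched/unmatched rewrite, reconstructed here as an assumption, is:

    Stmt → if Expr then Stmt
         | if Expr then WithElse else Stmt
         | … other stmts …

    WithElse → if Expr then WithElse else WithElse
             | … other stmts …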

Ambiguity

    if Expr₁ then if Expr₂ then Stmt₁ else Stmt₂

This binds the else controlling Stmt₂ to the inner if.

Deeper Ambiguity

Ambiguity usually refers to confusion in the CFG, but overloading can create deeper ambiguity:

    a = f(17)

In many Algol-like languages, f could be either a function or a subscripted variable. Disambiguating this one requires context:
- It needs the values of declarations
- It is really an issue of type, not context-free syntax
- It requires an extra-grammatical solution (not in the CFG)

We must handle these with a different mechanism: step outside the grammar rather than use a more complex grammar.

Ambiguity - the Final Word

Ambiguity arises from two distinct sources:
- Confusion in the context-free syntax (if-then-else)
- Confusion that requires context to resolve (overloading)

Resolving ambiguity:
- To remove context-free ambiguity, rewrite the grammar
- Handling context-sensitive ambiguity takes cooperation: knowledge of declarations, types, …; accept a superset of L(G) and check it by other means† (this is a language design problem)

Sometimes, the compiler writer accepts an ambiguous grammar and relies on parsing techniques that "do the right thing", i.e., always select the same derivation.