1 Lexical Analysis Cheng-Chia Chen

2 Outline 1. The goal and niche of lexical analysis in a compiler 2. Lexical tokens 3. Regular expressions (RE) 4. Use regular expressions in lexical specification 5. Finite automata (FA) »DFA and NFA »from RE to NFA »from NFA to DFA »from DFA to optimized DFA 6. Lexical-analyzer generators

3 1. The goal and niche of lexical analysis Compilation pipeline: Source (char stream) → Lexical analysis → Tokens (token stream) → Parsing → Interm. Language → Optimization / Code Gen. → Machine Code. Goal of lexical analysis: breaking the input into individual words or "tokens"

4 Lexical Analysis l What do we want to do? Example: if (i == j) z = 0; else z = 1; l The input is just a sequence of characters: \tif (i == j)\n\t\tz = 0;\n\telse\n\t\tz = 1; l Goal: Partition the input string into substrings »And determine the categories (token types) to which the substrings belong

5 2. Lexical Tokens l What’s a token ? l Token attributes l Normal token and special tokens l Example of tokens and special tokens.

6 What's a token? l a sequence of characters that can be treated as a unit in the grammar of a PL. Output of lexical analysis is a stream of tokens l Tokens are partitioned into categories called token types. ex: »In English: –book, students, like, help, strong, … : tokens –noun, verb, adjective, … : token types »In a programming language: –student, var34, 345, if, class, "abc", … : tokens –ID, Integer, IF, WHILE, Whitespace, … : token types l The parser relies on the token type instead of token distinctions to analyze: »var32 and var1 are treated the same, »var32 (ID), 32 (Integer) and if (IF) are treated differently.

7 Token attributes l token type : »category of the token; used by syntax analysis. »ex: identifier, integer, string, if, plus, … l token value : »semantic value used in semantic analysis. »ex: [integer, 26], [string, “26”] l token lexeme (member, text): »textual content of a token »[while, “while”], [identifier, “var23”], [plus, “+”],… l positional information: »start/end line/position of the textual content in the source program.

8 Notes on Token attributes l Token types affect syntax analysis l Token values affect semantic analysis l lexeme and positional information affect error handling l Only token type information must be supplied by the lexical analyzer. l Any program performing lexical analysis is called a scanner (lexer, lexical analyzer).

9 Aspects of Token types l Language view: A token type is the set of all lexemes of all its token instances. » ID = {a, ab, … } – {if, do,…}. » Integer = { 123, 456, …} » IF = {if}, WHILE={while}; » STRING={“abc”, “if”, “WHILE”,…} l Pattern (regular expression): a rule defining the language of all instances of a token type. »WHILE: w h i l e »ID: letter (letters | digits )* »ArithOp: + | - | * | /

10 Lexical Analyzer: Implementation l An implementation must do two things: 1.Recognize substrings corresponding to lexemes of tokens 2.Determine token attributes 1.type is necessary 2.value depends on the type/application, 3.lexeme/positional information depends on applications (eg: debug or not).

11 Example l input lines: \tif (i == j)\n\t\tz = 0;\n\telse\n\t\tz = 1; l Token-lexeme pairs returned by the lexer: »[Whitespace, “ \t ” ] »[if, - ] »[OpenPar, “ ( “ ] »[Identifier, “ i ” ] »[Relation, “ == “ ] »[Identifier, “ j ” ] »…

12 Normal Tokens and Special Tokens l Kinds of tokens »normal tokens: needed for later syntax analysis and must be passed to the parser. »special tokens – skipped tokens (or nontokens): –do not contribute to parsing, –discarded by the scanner. l Examples: Whitespace, Comments »why are they needed? l Question: What happens if we remove all whitespace and all comments prior to scanning?

13 Lexical Analysis in FORTRAN l FORTRAN rule: Whitespace is insignificant E.g., VAR1 is the same as VA R1 l Footnote: FORTRAN whitespace rule motivated by inaccuracy of punch card operators

14 A terrible design! Example l Consider »DO 5 I = 1,25 »DO 5 I = 1.25 The first is DO 5 I = 1, 25 The second is DO5I = 1.25 Reading left-to-right, cannot tell if DO5I is a variable or DO stmt. until after “, ” is reached

15 Lexical Analysis in FORTRAN. Lookahead. l Two important points: 1. The goal is to partition the string. This is implemented by reading left-to-right, recognizing one token at a time 2. "Lookahead" may be required to decide where one token ends and the next token begins »Even our simple example has lookahead issues: i vs. if, = vs. ==

16 Some token types of a typical PL
Type    Examples
ID      foo n14 last
NUM     73 0 00 515 082
REAL    66.1 .5 10. 1e67 1.5e-10
IF      if
COMMA   ,
NOTEQ   !=
LPAREN  (
RPAREN  )

17 Some Special Tokens l 1 and 5 are skipped; 2 and 3 need preprocessing; 4 needs to be expanded. 1. comment: /* … */ or // … 2. preprocessor directive: #include 3. #define NUMS 5,6 4. macro: NUMS 5. blanks, tabs, newlines: \t \n

18 3. Regular expressions and Regular Languages

19 The geography of lexical tokens (Diagram: the set of all strings, with disjoint regions for each token type — ID: var1, last5, …; NUM; REAL: 1e67, …; IF: if; LPAREN: (; RPAREN: ); special tokens: \t \n /* … */)

20 Issues l Definition problem: »how to define (formally specify) the set of strings (tokens) belonging to a token type? »=> regular expressions l Recognition problem: »How to determine which set (token type) an input string belongs to? »=> DFA!

21 Languages Def. Let Σ be a set of symbols (or characters). A language over Σ is a set of strings of characters drawn from Σ (Σ is called the alphabet)

22 Examples of Languages l Alphabet = English characters l Language = English words l Not every string on English characters is an English word »likes, school, … »beee,yykk, … l Alphabet = ASCII l Language = C programs l Note: ASCII character set is different from English character set

23 Regular Expressions l A language (metalanguage) for representing (or defining) languages (sets of words) Definition: Let Σ be an alphabet. The set of regular expressions (RegExpr) over Σ is defined recursively as follows: »(Atomic RegExpr): 1. any symbol c ∈ Σ is a RegExpr. »2. ε (empty string) is a RegExpr. »(Compound RegExpr): if A and B are RegExpr, then so are 3. (A | B) (alternation) 4. (A · B) (concatenation) 5. A* (repetition)

24 Semantics (Meaning) of regular expressions l For each regular expression A, we use L(A) to denote the language defined by A. l I.e. L is the function: L: RegExpr(Σ) → the set of languages over Σ, with L(A) = the language denoted by RegExpr A l The meaning of RegExpr can be made clear by explicitly defining L.

25 Atomic Regular Expressions l 1. Single symbol: c L(c) = { c } (for any c ∈ Σ) 2. Epsilon (empty string): ε L(ε) = { ε }

26 Compound Regular Expressions l 3. Alternation (or union or choice): L((A | B)) = { s | s ∈ L(A) or s ∈ L(B) } l 4. Concatenation: A·B (where A and B are reg. exp.) L((A·B)) = L(A)·L(B) =def { αβ | α ∈ L(A) and β ∈ L(B) } l Note: »Parentheses enclosing (A|B) and (A·B) can be omitted if there is no worry of confusion. »A·B (set concatenation) and α·β (string concatenation) will be abbreviated to AB and αβ, respectively. »A·A and L(A)·L(A) are abbreviated as A² and L(A)², respectively.

27 Examples l if | then | else → { if, then, else } l 0 | 1 | … | 9 → { 0, 1, …, 9 } l (0 | 1)(0 | 1) → { 00, 01, 10, 11 }

28 More Compound Regular Expressions l 5. Repetition (or iteration): A* L(A*) = { ε } ∪ L(A) ∪ L(A)² ∪ L(A)³ ∪ … l Examples: »0*: { ε, 0, 00, 000, … } »10*: strings starting with 1 and followed by 0's. »(0|1)*0: binary even numbers. »(a|b)*aa(a|b)*: strings of a's and b's containing consecutive a's. »b*(abb*)*(a|ε): strings of a's and b's with no consecutive a's.
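The repetition examples above can be checked mechanically; a small sketch using Python's re module, where re.fullmatch anchors a pattern to the whole string and an empty alternative plays the role of ε (the helper in_language is illustrative, not from the slides):

```python
import re

# Hypothetical helper: is s in L(pattern)?  re.fullmatch requires the
# pattern to match the ENTIRE string, matching the slides' notion of L(A).
def in_language(pattern: str, s: str) -> bool:
    return re.fullmatch(pattern, s) is not None

# (0|1)*0 : binary strings ending in 0, i.e. binary even numbers
assert in_language(r"(0|1)*0", "1010")
assert not in_language(r"(0|1)*0", "101")

# (a|b)*aa(a|b)* : strings over {a,b} containing consecutive a's
assert in_language(r"(a|b)*aa(a|b)*", "babaab")
assert not in_language(r"(a|b)*aa(a|b)*", "ababab")

# b*(abb*)*(a|ε) : no consecutive a's -- epsilon becomes an empty alternative
assert in_language(r"b*(abb*)*(a|)", "ababba")
assert not in_language(r"b*(abb*)*(a|)", "aab")
```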

29 Example: Keyword »Keyword: else or if or begin … → else | if | begin | …

30 Example: Integers Integer: a non-empty string of digits ( 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 ) ( 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 )* l problem: reusing a complicated expression l improvement: define intermediate reg. expr. digit = 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 number = digit digit* Abbreviation: A+ = A A*

31 Regular Definitions l Names for regular expressions »d1 = r1 »d2 = r2 »... »dn = rn where each ri is over the alphabet Σ ∪ {d1, d2, ..., di-1} l note: Recursion is not allowed.

32 Example »Identifier: strings of letters or digits, starting with a letter digit = 0 | 1 |... | 9 letter = A | … | Z | a | … | z identifier = letter (letter | digit) * »Is (letter* | digit*) the same ?
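The closing question on this slide can be answered by experiment; a sketch using Python's re module, with character classes standing in for the slide's letter and digit definitions:

```python
import re

letter = "[A-Za-z]"
digit = "[0-9]"

identifier = f"{letter}({letter}|{digit})*"   # a letter, then letters/digits
variant    = f"({letter}*|{digit}*)"          # all letters, or all digits

def full(p: str, s: str) -> bool:
    return re.fullmatch(p, s) is not None

# Not the same language: the variant cannot mix letters and digits,
# and it accepts pure digit strings (and the empty string).
assert full(identifier, "var23")
assert not full(variant, "var23")
assert full(variant, "2345")
assert not full(identifier, "2345")
```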

33 Example: Whitespace Whitespace: a non-empty sequence of blanks, newlines, CRNLs and tabs WS = (' ' | \t | \n | \r\n)+

34 Example: Addresses l Consider Σ = letters ∪ { '.', '@' } name = letter+ address = name '@' name ('.' name)*

35 Notational Shorthands l One or more instances »r+ = r r* »r* = (r+ | ε) l Zero or one instance »r? = (r | ε) l Character classes »[abc] = a | b | c »[a-z] = a | b |... | z »[ac-f] = a | c | d | e | f »[^ac-f] = Σ – [ac-f]

36 Summary l Regular expressions describe many useful languages l Regular expressions are a language specification »We still need an implementation l problem: Given a string s and a rexp R, is s ∈ L(R)?

37 4. Use Regular expressions in lexical specification

38 Goal l Specifying lexical structure using regular expressions

39 Regular Expressions in Lexical Specification l Last lecture: the specification of all lexemes in a token type using regular expression. l But we want a specification of all lexemes of all token types in a programming language. »Which may enable us to partition the input into lexemes l We will adapt regular expressions to this goal

40 Regular Expressions => Lexical Spec. (1) 1. Select a set of token types Number, Keyword, Identifier, … 2. Write a rexp for the lexemes of each token type Number = digit+ Keyword = if | else | … Identifier = letter (letter | digit)* LParen = '(' …

41 Regular Expressions => Lexical Spec. (2) 3. Construct R, matching all lexemes for all tokens R = Keyword | Identifier | Number | … = R1 | R2 | R3 | … Facts: If s ∈ L(R) then s is a lexeme »Furthermore s ∈ L(Ri) for some "i" »This "i" determines the token type that is reported

42 Regular Expressions => Lexical Spec. (3) 4. Let the input be x1 … xn (x1 ... xn are symbols in the language alphabet) For 1 ≤ i ≤ n check x1 … xi ∈ L(R) ? 5. It must be that x1 … xi ∈ L(Rj) for some j 6. Remove t = x1 … xi from the input if t is a normal token, then pass it to the parser // else it is whitespace or a comment, just skip it! 7. go to (4)

43 Ambiguities (1) l There are ambiguities in the algorithm l How much input is used? What if –x1 … xi ∈ L(R) and also –x1 … xk ∈ L(R) for some i ≠ k. l Rule: Pick the longest possible substring »The longest match principle!!

44 Ambiguities (2) l Which token is used? What if –x1 … xi ∈ L(Rj) and also –x1 … xi ∈ L(Rk) l Rule: use the rule listed first (j, if j < k) »Earlier rule first! l Example: »R1 = Keyword and R2 = Identifier »"if" matches both. »Treat "if" as a keyword, not an identifier

45 Error Handling l What if no rule matches a prefix of the input? Problem: Can't just get stuck … l Solution: »Write a rule matching all "bad" strings »Put it last l Lexer tools allow the writing of: R = R1 | ... | Rn | Error »Token Error matches if nothing else matches
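The last three slides — longest match, earlier-rule-first, and a final catch-all error rule — combine into a simple matching loop. A toy sketch; the token names and patterns below are illustrative, not from the deck:

```python
import re

# Rule order matters: on equal-length matches the EARLIER rule wins.
RULES = [
    ("KEYWORD", r"if|else"),
    ("ID",      r"[A-Za-z][A-Za-z0-9]*"),
    ("NUM",     r"[0-9]+"),
    ("SKIP",    r"[ \t\n]+"),   # whitespace: matched but not passed to the parser
    ("ERROR",   r"."),          # last rule: matches any single "bad" character
]

def tokenize(src):
    tokens, i = [], 0
    while i < len(src):
        best_len, best_type = 0, None
        for ttype, pat in RULES:
            m = re.match(pat, src[i:])
            # strictly longer => longest match; ties keep the earlier rule
            if m and len(m.group()) > best_len:
                best_len, best_type = len(m.group()), ttype
        if best_type != "SKIP":
            tokens.append((best_type, src[i:i + best_len]))
        i += best_len
    return tokens

print(tokenize("if x1 else 42"))
# [('KEYWORD', 'if'), ('ID', 'x1'), ('KEYWORD', 'else'), ('NUM', '42')]
```

Note how the two rules interact: "if" is a KEYWORD (earlier rule, same length), but "ifx" is an ID, because the ID match is longer (maximal munch).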

46 Summary l Regular expressions provide a concise notation for string patterns l Use in lexical analysis requires small extensions »To resolve ambiguities »To handle errors l Efficient algorithms exist (next) »Require only single pass over the input »Few operations per character (table lookup)

47 5. Finite Automata l Regular expressions = specification l Finite automata = implementation l A finite automaton consists of »An input alphabet Σ »A finite set of states S »A start state n »A set of accepting states F ⊆ S »A set of transitions: state ×input→ state

48 Finite Automata l Transition: s1 →a s2 l Is read: In state s1, on input "a", go to state s2 l If end of input (or no transition possible) »If in accepting state => accept »Otherwise => reject

49 Finite Automata State Transition Graphs (Notation: a circle is a state; an arrow into a circle marks the start state; a double circle is an accepting state; an edge labeled a is a transition on input a)

50 A Simple Example A finite automaton that accepts only "1" (a start state with a 1-edge to an accepting state)

51 Another Simple Example A finite automaton accepting any number of 1's followed by a single 0 l Alphabet: {0,1} (a start state looping on 1, with a 0-edge to an accepting state) accepted input: 1*0

52 And Another Example l Alphabet {0,1} l What language does this recognize? accepted inputs: to be answered later!

53 And Another Example l Alphabet still { 0, 1 } l The operation of the automaton is not completely defined by the input »On input "11" the automaton could be in either state

54 Epsilon Moves l Another kind of transition: ε-moves. The machine can move from state A to state B without reading input

55 Deterministic and Nondeterministic Automata l Deterministic Finite Automata (DFA) »One transition per input per state »No ε-moves l Nondeterministic Finite Automata (NFA) »Can have multiple transitions for one input in a given state »Can have ε-moves l Finite automata can have only a finite number of states.

56 Execution of Finite Automata l A DFA can take only one path through the state graph »Completely determined by the input l NFAs can choose »Whether to make ε-moves »Which of multiple transitions for a single input to take

57 Acceptance of NFAs l An NFA can get into multiple states on the same input l Rule: an NFA accepts if it can get into a final state

58 Acceptance of a Finite Automata l A FA (DFA or NFA) accepts an input string s iff there is some path in the transition diagram from the start state to some final state such that the edge labels along this path spell out s

59 NFA vs. DFA (1) l NFAs and DFAs recognize the same set of languages (regular languages) l DFAs are easier to implement »There are no choices to consider

60 NFA vs. DFA (2) l For a given language the NFA can be simpler than the DFA l The DFA can be exponentially larger than the NFA

61 Operations on NFA states l ε-closure(s): set of NFA states reachable from NFA state s on ε-transitions alone l ε-closure(S): set of NFA states reachable from some NFA state s in S on ε-transitions alone l move(S, c): set of NFA states to which there is a transition on input symbol c from some NFA state s in S notes: »ε-closure(S) = ∪s∈S ε-closure(s); »ε-closure(s) = ε-closure({s}); »ε-closure(∅) = ?

62 Computing ε-closure l Input. An NFA and a set of NFA states S. Output. E = ε-closure(S). begin push all states in S onto stack; T := S; while stack is not empty do begin pop t, the top element, off of stack; for each state u with an edge from t to u labeled ε do if u is not in T then begin add u to T; push u onto stack end end; return T end.
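The worklist pseudocode above translates directly to Python; encoding the NFA's ε-edges as a dict of sets is an assumption of this sketch, not fixed by the slides:

```python
# eps[state] = set of states reachable on one epsilon-transition
def eps_closure(states, eps):
    closure = set(states)          # T := S
    stack = list(states)           # push all states in S
    while stack:
        t = stack.pop()
        for u in eps.get(t, ()):   # each epsilon-successor of t
            if u not in closure:
                closure.add(u)
                stack.append(u)
    return closure

# Example: 0 -eps-> 1, 0 -eps-> 2, 1 -eps-> 3
eps = {0: {1, 2}, 1: {3}}
assert eps_closure({0}, eps) == {0, 1, 2, 3}
```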

63 Simulating an NFA l Input. An input string x ended with eof and an NFA with start state s0 and final states F. l Output. The answer "yes" if the NFA accepts x, "no" otherwise. begin S := ε-closure({s0}); c := next_symbol(); while c != eof do begin S := ε-closure(move(S, c)); c := next_symbol(); end; if S ∩ F ≠ ∅ then return "yes" else return "no" end.
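The same loop in Python, with the NFA encoded as a transition dict delta[(state, symbol)] -> set of states, the empty string '' standing for ε — an encoding chosen for this sketch:

```python
def eps_closure(states, delta):
    closure, stack = set(states), list(states)
    while stack:
        t = stack.pop()
        for u in delta.get((t, ''), ()):   # '' marks epsilon-edges
            if u not in closure:
                closure.add(u)
                stack.append(u)
    return closure

def nfa_accepts(delta, start, finals, inp):
    S = eps_closure({start}, delta)
    for c in inp:
        # move(S, c), then close under epsilon-transitions
        moved = set().union(*[delta.get((s, c), set()) for s in S])
        S = eps_closure(moved, delta)
    return bool(S & finals)   # accept iff some reachable state is final

# NFA for 1*0 : state 0 loops on '1', moves to final state 1 on '0'
delta = {(0, '1'): {0}, (0, '0'): {1}}
assert nfa_accepts(delta, 0, {1}, "1110")
assert not nfa_accepts(delta, 0, {1}, "101")
```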

64 Regular Expressions to Finite Automata l High-level sketch: Lexical Specification → Regular expressions → NFA → DFA → Optimized DFA → Table-driven Implementation of DFA

65 Regular Expressions to NFA (1) l For each kind of rexp, define an NFA »Notation: a box labeled A stands for the NFA for rexp A »For ε: a start state with a single ε-edge to an accepting state »For input a: a start state with a single a-edge to an accepting state

66 Regular Expressions to NFA (2) l For AB: the NFA for A, whose accepting state is connected by an ε-edge to the start state of the NFA for B l For A | B: a new start state with ε-edges to the NFAs for A and for B, whose accepting states have ε-edges to a new accepting state

67 Regular Expressions to NFA (3) l For A*: a new start state with ε-edges to the NFA for A and to a new accepting state; A's accepting state has ε-edges back to A's start state and on to the new accepting state

68 Example of RegExp -> NFA conversion l Consider the regular expression (1|0)*1 l The NFA has states A through J (diagram: the construction of the previous slides applied to (1|0)*1)

69 NFA to DFA

70 Regular Expressions to Finite Automata l High-level sketch: Lexical Specification → Regular expressions → NFA → DFA → Optimized DFA → Table-driven Implementation of DFA

71 RegExp -> NFA : an Example l Consider the regular expression (1|0)*1 l The NFA has states A through J (the same construction as before)

72 NFA to DFA. The Trick l Simulate the NFA l Each state of the DFA = a non-empty subset of states of the NFA l Start state = the set of NFA states reachable through ε-moves from the NFA start state l Add a transition S →a S' to the DFA iff »S' is the set of NFA states reachable from any state in S after seeing the input a –considering ε-moves as well

73 NFA -> DFA Example (Diagram: the DFA states are the NFA-state subsets {A,B,C,D,H,I}, {F,G,A,B,C,D,H,I} and {E,J,G,A,B,C,D,H,I})

74 NFA to DFA. Remark l An NFA may be in many states at any time l How many different states? l If there are N states, the NFA must be in some subset of those N states l How many non-empty subsets are there? »2^N − 1 = finitely many

75 From an NFA to a DFA l Subset construction algorithm. l Input. An NFA N. l Output. A DFA D with states S and transition table mv. begin add ε-closure(s0) as an unmarked state to S; while there is an unmarked state T in S do begin mark T; for each input symbol a do begin U := ε-closure(move(T, a)); if U is not in S then add U as an unmarked state to S; mv[T, a] := U end end end.
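The subset construction above, in Python, over the same dict-based NFA encoding (delta[(state, symbol)] -> set, '' for ε — an assumption of this sketch). DFA states are frozensets of NFA states, so they can serve as dict keys:

```python
def eps_closure(states, delta):
    closure, stack = set(states), list(states)
    while stack:
        t = stack.pop()
        for u in delta.get((t, ''), ()):
            if u not in closure:
                closure.add(u)
                stack.append(u)
    return closure

def subset_construction(delta, start, alphabet):
    start_set = frozenset(eps_closure({start}, delta))
    dfa, unmarked = {}, [start_set]       # dfa keys = "marked" DFA states
    while unmarked:                       # while there is an unmarked state T
        T = unmarked.pop()
        if T in dfa:
            continue                      # already marked
        dfa[T] = {}
        for a in alphabet:
            moved = set().union(*[delta.get((s, a), set()) for s in T])
            U = frozenset(eps_closure(moved, delta))
            dfa[T][a] = U                 # mv[T, a] := U
            if U not in dfa:
                unmarked.append(U)
    return start_set, dfa

# NFA for 1*0 again:
delta = {(0, '1'): {0}, (0, '0'): {1}}
start, dfa = subset_construction(delta, 0, "01")
assert dfa[start]['0'] == frozenset({1})
```

Unlike the slide, this sketch also creates a state for the empty subset (a dead state), which keeps the transition table total.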

76 Implementation l A DFA can be implemented by a 2D table T »One dimension is "states" »The other dimension is "input symbols" »For every transition Si →a Sk define mv[i,a] = k l DFA "execution" »If in state Si on input a, read mv[i,a] (= k) and move to state Sk »Very efficient

77 Table Implementation of a DFA
     0  1
S    T  U
T    T  U
U    T  U

78 Simulation of a DFA l Input. An input string x ended with eof and a DFA with start state s0 and final states F. Output. The answer "yes" if the DFA accepts x, "no" otherwise. begin s := s0; c := next_symbol(); while c <> eof do begin s := mv(s, c); c := next_symbol() end; if s is in F then return "yes" else return "no" end.
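Slides 76-78 combine into a few lines of Python. The transition entries below are slide 77's (states S, T, U over inputs 0 and 1); taking S as the start state and U as the accepting state is an assumption of this sketch, since the transcript does not mark them:

```python
# 2D transition table, written as a dict: mv[(state, symbol)] = next state
mv = {
    ('S', '0'): 'T', ('S', '1'): 'U',
    ('T', '0'): 'T', ('T', '1'): 'U',
    ('U', '0'): 'T', ('U', '1'): 'U',
}

def dfa_accepts(mv, start, finals, inp):
    s = start
    for c in inp:          # one table lookup per input character
        s = mv[(s, c)]
    return s in finals

# With U assumed accepting, this DFA accepts exactly the strings ending in 1.
assert dfa_accepts(mv, 'S', {'U'}, "0101")
assert not dfa_accepts(mv, 'S', {'U'}, "0110")
```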

79 Implementation (Cont.) l NFA -> DFA conversion is at the heart of tools such as flex l But, DFAs can be huge »DFA => optimized DFA : try to decrease the number of states. »not always helpful! l In practice, flex-like tools trade off speed for space in the choice of NFA and DFA representations

80 Time-Space Tradeoffs l RE to NFA, simulate NFA »time: O(|r|·|x|), space: O(|r|) l RE to NFA, NFA to DFA, simulate DFA »time: O(|x|), space: O(2^|r|) l Lazy transition evaluation »transitions are computed as needed at run time; »computed transitions are stored in a cache for later use

81 DFA to optimized DFA

82 Motivations Problems: 1. Given a DFA M with k states, is it possible to find an equivalent DFA M' (i.e., L(M) = L(M')) with fewer than k states? 2. Given a regular language A, how to find a machine with the minimum number of states? Ex: A = L((a+b)*aba(a+b)*) can be accepted by a 4-state NFA (states s, t, u, v). By applying the subset construction, we can construct a DFA M2 with 2^4 = 16 states, of which only 6 are accessible from the initial state {s}.

83 Inaccessible states l A state p ∈ Q is said to be inaccessible (or unreachable) [from the initial state] if there exists no path from the initial state to it. If a state is not inaccessible, it is accessible. l Inaccessible states can be removed from the DFA without affecting the behavior of the machine. l Problem: Given a DFA (or NFA), how to find all inaccessible states?

84 Finding all accessible states (like ε-closure) l Input. An FA (DFA or NFA) l Output. The set A of all accessible states begin push all start states onto stack; add all start states into A; while stack is not empty do begin pop t, the top element, off of stack; for each state u with an edge from t to u do if u is not in A then begin add u to A; push u onto stack end end; return A end.

85 Minimization process l Minimization process for a DFA: »1. Remove all inaccessible states »2. Collapse all equivalent states l What does it mean that two states are equivalent? »both states have the same observable behavior, i.e., »there is no way to distinguish them, or »more formally: p and q are not equivalent (i.e., distinguishable) iff there is a string x ∈ Σ* s.t. exactly one of δ(p,x) and δ(q,x) is a final state, »where δ(p,x) is the ending state of the path from p with x as the input. l Equivalent states can be merged to form a simpler machine.

86 Example: (diagram of a DFA over {a,b} and its minimized version, with equivalent states merged)

87 Quotient Construction M = (Q, Σ, δ, s, F): a DFA. ≡: a relation on Q defined by: p ≡ q iff for all x ∈ Σ*: δ(p,x) ∈ F iff δ(q,x) ∈ F Property: ≡ is an equivalence relation. l Hence it partitions Q into equivalence classes [p] = { q ∈ Q | p ≡ q } for p ∈ Q, and the quotient set Q/≡ = { [p] | p ∈ Q }. Every p ∈ Q belongs to exactly one class [p], and p ≡ q iff [p] = [q]. Define the quotient machine M/≡ = (Q', Σ, δ', s', F') where »Q' = Q/≡; s' = [s]; F' = { [p] | p ∈ F }; and δ'([p],a) = [δ(p,a)] for all p ∈ Q and a ∈ Σ.

88 Minimization algorithm l input: a DFA l output: an optimized DFA 1. Write down a table of all pairs {p,q}, initially unmarked. 2. Mark {p,q} if p ∈ F and q ∉ F, or vice versa. 3. Repeat until no more change: 3.1 if ∃ an unmarked pair {p,q} s.t. {move(p,a), move(q,a)} is marked for some a ∈ Σ, then mark {p,q}. 4. When done, p ≡ q iff {p,q} is not marked. 5. Merge all equivalent states into one class and return the resulting machine
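A sketch of this table-filling algorithm in Python, run on a small DFA with final states {1,2,5} chosen so that it reproduces the deck's worked answer (1 ≡ 2 and 3 ≡ 4); the DFA itself is an illustration, not a transcription of the slides' table:

```python
from itertools import combinations

# DFA given as delta[(state, symbol)] -> state
def equivalent_pairs(states, alphabet, delta, finals):
    marked = set()
    for p, q in combinations(states, 2):          # step 2: final vs. non-final
        if (p in finals) != (q in finals):
            marked.add(frozenset((p, q)))
    changed = True
    while changed:                                # step 3: propagate marks
        changed = False
        for p, q in combinations(states, 2):
            pair = frozenset((p, q))
            if pair in marked:
                continue
            for a in alphabet:
                succ = frozenset((delta[(p, a)], delta[(q, a)]))
                if len(succ) == 2 and succ in marked:
                    marked.add(pair)              # successors distinguishable
                    changed = True
                    break
    # step 4: the unmarked pairs are exactly the equivalent ones
    return [frozenset((p, q)) for p, q in combinations(states, 2)
            if frozenset((p, q)) not in marked]

delta = {(0,'a'):1,(0,'b'):2,(1,'a'):3,(1,'b'):4,(2,'a'):4,(2,'b'):3,
         (3,'a'):5,(3,'b'):5,(4,'a'):5,(4,'b'):5,(5,'a'):5,(5,'b'):5}
eq = equivalent_pairs(range(6), "ab", delta, {1, 2, 5})
assert set(eq) == {frozenset({1, 2}), frozenset({3, 4})}
```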

89 An Example: l The DFA (start state marked >, final states marked F):
      a  b
>0    1  2
 1F   3  4
 2F   4  3
 3    5  5
 4    5  5
 5F   5  5

90 Initial Table

91 After step 2
   0  1  2  3  4
1  M
2  M  -
3  -  M  M
4  -  M  M  -
5  M  -  -  M  M

92 After first pass of step 3
   0  1  2  3  4
1  M
2  M  -
3  -  M  M
4  -  M  M  -
5  M  M  M  M  M

93 2nd pass of step 3. The result: 1 ≡ 2 and 3 ≡ 4.
   0   1   2   3   4
1  M
2  M   -
3  M2  M   M
4  M2  M   M   -
5  M   M1  M1  M   M
(M1 / M2 = marked in the 1st / 2nd pass of step 3.)