1 Lexical Analysis
Cheng-Chia Chen

2 Outline
1. The goal and niche of lexical analysis in a compiler
2. Lexical tokens
3. Regular expressions (RE)
4. Use of regular expressions in lexical specification
5. Finite automata (FA)
» DFA and NFA
» from RE to NFA
» from NFA to DFA
» from DFA to optimized DFA
6. Lexical-analyzer generators

3 1. The goal and niche of lexical analysis

Source (char stream) → Lexical analysis → Tokens (token stream) → Parsing → Interm. Language → Optimization → Code Gen. → Machine Code

Goal of lexical analysis: breaking the input into individual words or “tokens”.

4 Lexical Analysis
• What do we want to do? Example:
  if (i == j)
    z = 0;
  else
    z = 1;
• The input is just a sequence of characters:
  \tif (i == j)\n\t\tz = 0;\n\telse\n\t\tz = 1;
• Goal: partition the input string into substrings, and determine the categories (token types) to which the substrings belong.

5 2. Lexical Tokens
• What’s a token?
• Token attributes
• Normal tokens and special tokens
• Examples of tokens and special tokens

6 What’s a token?
• A sequence of characters that can be treated as a unit in the grammar of a programming language. The output of lexical analysis is a stream of tokens.
• Tokens are partitioned into categories called token types. Ex:
» In English:
– book, students, like, help, strong, … : tokens
– noun, verb, adjective, … : token types
» In a programming language:
– student, var34, 345, if, class, “abc”, … : tokens
– ID, Integer, IF, WHILE, Whitespace, … : token types
• The parser relies on token-type rather than individual-token distinctions in its analysis:
» var32 and var1 are treated the same,
» var32 (ID), 32 (Integer) and if (IF) are treated differently.

7 Token attributes
• token type:
» category of the token; used by syntax analysis.
» ex: identifier, integer, string, if, plus, …
• token value:
» semantic value used in semantic analysis.
» ex: [integer, 26], [string, “26”]
• token lexeme (member, text):
» textual content of the token.
» ex: [while, “while”], [identifier, “var23”], [plus, “+”], …
• positional information:
» start/end line/position of the textual content in the source program.

8 Notes on token attributes
• Token types affect syntax analysis.
• Token values affect semantic analysis.
• Lexemes and positional information affect error handling.
• Only token-type information must be supplied by the lexical analyzer.
• Any program performing lexical analysis is called a scanner (lexer, lexical analyzer).

9 Aspects of token types
• Language view: a token type is the set of all lexemes of all its token instances.
» ID = {a, ab, …} – {if, do, …}
» Integer = {123, 456, …}
» IF = {if}, WHILE = {while}
» STRING = {“abc”, “if”, “WHILE”, …}
• Pattern (regular expression) view: a rule defining the language of all instances of a token type.
» WHILE: w h i l e
» ID: letter (letters | digits)*
» ArithOp: + | - | * | /

10 Lexical Analyzer: Implementation
• An implementation must do two things:
1. Recognize substrings corresponding to lexemes of tokens.
2. Determine token attributes:
» type is always necessary;
» value depends on the type/application;
» lexeme/positional information depends on the application (e.g., debugging or not).

11 Example
• Input lines: \tif (i == j)\n\t\tz = 0;\n\telse\n\t\tz = 1;
• Token-lexeme pairs returned by the lexer:
» [Whitespace, “\t”]
» [if, -]
» [OpenPar, “(”]
» [Identifier, “i”]
» [Relation, “==”]
» [Identifier, “j”]
» …

12 Normal tokens and special tokens
• Kinds of tokens:
» normal tokens: needed for later syntax analysis and must be passed to the parser.
» special tokens (skipped tokens, or nontokens):
– do not contribute to parsing,
– discarded by the scanner.
• Examples: whitespace, comments.
» Why do we need them?
• Question: what happens if we remove all whitespace and all comments prior to scanning?

13 Lexical Analysis in FORTRAN
• FORTRAN rule: whitespace is insignificant. E.g., VAR1 is the same as VA R1.
• Footnote: the FORTRAN whitespace rule was motivated by the inaccuracy of punch-card operators.

14 A terrible design! Example
• Consider:
» DO 5 I = 1,25
» DO 5 I = 1.25
• The first is the loop statement DO 5 I = 1, 25; the second is the assignment DO5I = 1.25.
• Reading left-to-right, we cannot tell whether DO5I is a variable or a DO statement until after the “,” (or “.”) is reached.

15 Lexical Analysis in FORTRAN. Lookahead.
• Two important points:
1. The goal is to partition the string. This is implemented by reading left-to-right, recognizing one token at a time.
2. “Lookahead” may be required to decide where one token ends and the next token begins.
» Even our simple example has lookahead issues: i vs. if, = vs. ==.

16 Some token types of a typical PL

Type     Examples
ID       foo  n14  last
NUM      73  0  00  515  082
REAL     66.1  .5  10.  1e67  1.5e-10
IF       if
COMMA    ,
NOTEQ    !=
LPAREN   (
RPAREN   )

17 Some Special Tokens
1. comment: /* … */  // …
2. preprocessor directive: #include
3. #define NUMS 5,6
4. macro: NUMS
5. blanks, tabs, newlines: \t \n
Notes: 1 and 5 are skipped; 2 and 3 need preprocessing; 4 needs to be expanded.

18 3. Regular expressions and Regular Languages

19 The geography of lexical tokens
Within the set of all strings, each token type occupies its own region:
» ID: var1, last5, …
» NUM: 23, 56, 0, 000
» REAL: 12.35, 2.4e–10, …
» IF: if
» LPAREN: (    RPAREN: )
» special tokens: \t \n /* … */

20 Issues
• Definition problem:
» how do we define (formally specify) the set of strings (tokens) belonging to a token type?
» => regular expressions
• Recognition problem:
» how do we determine which set (token type) an input string belongs to?
» => DFAs!

21 Languages
Def. Let Σ be a set of symbols (or characters). A language over Σ is a set of strings of characters drawn from Σ (Σ is called the alphabet).

22 Examples of Languages
• Alphabet = English characters; language = English words.
» Not every string of English characters is an English word:
– likes, school, … are words;
– beee, yykk, … are not.
• Alphabet = ASCII; language = C programs.
» Note: the ASCII character set is different from the English character set.

23 Regular Expressions
• A (meta)language for representing (or defining) languages (sets of words).
Definition: let Σ be an alphabet. The set of regular expressions (RegExpr) over Σ is defined recursively as follows:
» (Atomic RegExpr):
1. any symbol c ∈ Σ is a RegExpr;
2. ε (the empty string) is a RegExpr.
» (Compound RegExpr): if A and B are RegExpr, then so are
3. (A | B) (alternation)
4. (A · B) (concatenation)
5. A* (repetition)

24 Semantics (Meaning) of regular expressions
• For each regular expression A, we write L(A) for the language defined by A.
• I.e., L is the function
L : RegExpr(Σ) → the set of languages over Σ
with L(A) = the language denoted by the RegExpr A.
• The meaning of RegExpr can be made clear by explicitly defining L.

25 Atomic Regular Expressions
1. Single symbol: c
L(c) = {c} (for any c ∈ Σ)
2. Epsilon (empty string): ε
L(ε) = {ε}

26 Compound Regular Expressions
3. Alternation (or union, or choice):
L((A | B)) = { s | s ∈ L(A) or s ∈ L(B) }
4. Concatenation: AB (where A and B are RegExpr):
L((A · B)) = L(A) · L(B) =def { α·β | α ∈ L(A) and β ∈ L(B) }
• Notes:
» Parentheses enclosing (A|B) and (AB) can be omitted if there is no risk of confusion.
» A · B (set concatenation) and α · β (string concatenation) are abbreviated to AB and αβ, respectively.
» A · A and L(A) · L(A) are abbreviated as A^2 and L(A)^2, respectively.
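The two set operations above can be checked directly with Python sets (a throwaway sketch; `union` and `concat` are hypothetical helper names, not part of any library):

```python
def union(A, B):
    # L(A|B) = L(A) ∪ L(B)
    return A | B

def concat(A, B):
    # L(A·B) = { α·β | α ∈ L(A), β ∈ L(B) }
    return {a + b for a in A for b in B}

LA = {"0", "1"}
print(sorted(union(LA, {"2"})))   # ['0', '1', '2']
print(sorted(concat(LA, LA)))     # ['00', '01', '10', '11']
```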

27 Examples
• if | then | else → {if, then, else}
• 0 | 1 | … | 9 → {0, 1, …, 9}
• (0 | 1)(0 | 1) → {00, 01, 10, 11}

28 More Compound Regular Expressions
5. Repetition (or iteration): A*
L(A*) = {ε} ∪ L(A) ∪ L(A)L(A) ∪ L(A)^3 ∪ …
• Examples:
» 0*: {ε, 0, 00, 000, …}
» 10*: strings starting with 1 and followed by 0’s.
» (0|1)*0: even binary numbers.
» (a|b)*aa(a|b)*: strings of a’s and b’s containing consecutive a’s.
» b*(abb*)*(a|ε): strings of a’s and b’s with no consecutive a’s.
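These example languages can be spot-checked with Python's `re` module, whose `|`, `*` and grouping coincide with the pure operators above (the empty alternative `(a|)` plays the role of (a|ε)):

```python
import re

even_binary = re.compile(r"(0|1)*0")         # binary numerals ending in 0
has_aa      = re.compile(r"(a|b)*aa(a|b)*")  # contains consecutive a's
no_aa       = re.compile(r"b*(abb*)*(a|)")   # no consecutive a's

assert even_binary.fullmatch("1010")
assert not even_binary.fullmatch("101")
assert has_aa.fullmatch("babaab")
assert no_aa.fullmatch("ababb")
assert not no_aa.fullmatch("aab")
```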

29 Example: Keyword
» Keyword: else or if or begin or …
else | if | begin | …

30 Example: Integers
• Integer: a non-empty string of digits.
(0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9)(0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9)*
• Problem: reusing a complicated expression.
• Improvement: define intermediate regular expressions:
digit = 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
number = digit digit*
• Abbreviation: A+ = A A*

31 Regular Definitions
• Names for regular expressions:
» d1 = r1
» d2 = r2
» ...
» dn = rn
where each ri is over the alphabet Σ ∪ {d1, d2, ..., d(i-1)}.
• Note: recursion is not allowed.

32 Example
» Identifier: strings of letters or digits, starting with a letter.
digit = 0 | 1 |... | 9
letter = A | … | Z | a | … | z
identifier = letter (letter | digit)*
» Is (letter* | digit*) the same?
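A quick way to see that (letter* | digit*) is not the same language: encode both as Python regexes using character classes (a sketch; the names mirror the regular definitions above):

```python
import re

letter = "[A-Za-z]"
digit  = "[0-9]"
identifier = re.compile(f"{letter}({letter}|{digit})*")
wrong      = re.compile(f"({letter}*|{digit}*)")

assert identifier.fullmatch("var23")
assert not identifier.fullmatch("3var")   # must start with a letter
assert not wrong.fullmatch("var23")       # letters-only OR digits-only
assert wrong.fullmatch("var") and wrong.fullmatch("23")
```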

33 Example: Whitespace
• Whitespace: a non-empty sequence of blanks, newlines, CR-NLs and tabs.
WS = (' ' | \t | \n | \r\n)+

34 Example: Email Addresses
• Consider chencc@cs.nccu.edu.tw
Σ = letters ∪ {., @}
name = letter+
address = name '@' name ('.' name)*
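The regular definition translates almost verbatim into a Python regex (a sketch; `name` and `address` mirror the definitions above, with letters restricted to A-Z/a-z):

```python
import re

name    = r"[A-Za-z]+"
address = re.compile(rf"{name}@{name}(\.{name})*")

assert address.fullmatch("chencc@cs.nccu.edu.tw")
assert not address.fullmatch("chencc@")   # a name must follow the @
```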

35 Notational Shorthands
• One or more instances:
» r+ = r r*
» r* = r+ | ε
• Zero or one instance:
» r? = r | ε
• Character classes:
» [abc] = a | b | c
» [a-z] = a | b |... | z
» [ac-f] = a | c | d | e | f
» [^ac-f] = Σ – [ac-f]

36 Summary
• Regular expressions describe many useful languages.
• Regular expressions are a language specification; we still need an implementation.
• Problem: given a string s and a RegExpr R, is s ∈ L(R)?

37 4. Use Regular expressions in lexical specification

38 Goal
• Specifying lexical structure using regular expressions.

39 Regular Expressions in Lexical Specification
• Last lecture: the specification of all lexemes of one token type using a regular expression.
• But we want a specification of all lexemes of all token types in a programming language,
» which may enable us to partition the input into lexemes.
• We will adapt regular expressions to this goal.

40 Regular Expressions => Lexical Spec. (1)
1. Select a set of token types: Number, Keyword, Identifier,...
2. Write a RegExpr for the lexemes of each token type:
Number = digit+
Keyword = if | else | …
Identifier = letter (letter | digit)*
LParen = '('
…

41 Regular Expressions => Lexical Spec. (2)
3. Construct R, matching all lexemes of all tokens:
R = Keyword | Identifier | Number | … = R1 | R2 | R3 | …
• Facts: if s ∈ L(R) then s is a lexeme.
» Furthermore, s ∈ L(Ri) for some i.
» This i determines the token type that is reported.

42 Regular Expressions => Lexical Spec. (3)
4. Let the input be x1 … xn (x1... xn are symbols in the language alphabet).
For 1 ≤ i ≤ n, check: x1 … xi ∈ L(R)?
5. It must be that x1 … xi ∈ L(Rj) for some j.
6. Remove t = x1 … xi from the input.
If t is a normal token, pass it to the parser;
else (it is whitespace or a comment) just skip it!
7. Go to (4).

43 Ambiguities (1)
• There are ambiguities in the algorithm.
• How much input is used? What if
– x1 … xi ∈ L(R) and also
– x1 … xk ∈ L(R) for some i != k?
• Rule: pick the longest possible substring.
» The longest-match principle!

44 Ambiguities (2)
• Which token type is used? What if
– x1 … xi ∈ L(Rj) and also
– x1 … xi ∈ L(Rk)?
• Rule: use the rule listed first (choose Rj if j < k).
» Earlier rule first!
• Example:
» R1 = Keyword and R2 = Identifier.
» “if” matches both.
» Treat “if” as a keyword, not an identifier.
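The longest-match and earlier-rule-first principles of slides 43-44 can be sketched as a tiny scanner. The rule names, patterns, and `tokenize` helper below are illustrative, not a real lexer generator's API:

```python
import re

RULES = [                        # order matters: earlier rule wins ties
    ("IF",     r"if"),
    ("ID",     r"[A-Za-z][A-Za-z0-9]*"),
    ("NUM",    r"[0-9]+"),
    ("EQ",     r"=="),
    ("ASSIGN", r"="),
    ("SKIP",   r"[ \t\n]+"),     # special tokens, discarded below
]

def tokenize(src):
    tokens, i = [], 0
    while i < len(src):
        best = None                          # (length, type, lexeme)
        for ttype, pat in RULES:
            m = re.compile(pat).match(src, i)
            # strictly longer wins; on a tie the earlier rule is kept
            if m and (best is None or len(m.group()) > best[0]):
                best = (len(m.group()), ttype, m.group())
        if best is None:
            raise SyntaxError(f"no rule matches at position {i}")
        length, ttype, lexeme = best
        if ttype != "SKIP":
            tokens.append((ttype, lexeme))
        i += length
    return tokens

print(tokenize("if ifx == 42"))
# [('IF', 'if'), ('ID', 'ifx'), ('EQ', '=='), ('NUM', '42')]
```

Note how "if" is reported as IF (tie, earlier rule) while "ifx" becomes ID (longest match), and "==" is EQ rather than two ASSIGNs.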

45 Error Handling
• What if no rule matches a prefix of the input?
• Problem: we can’t just get stuck …
• Solution:
» write a rule matching all “bad” strings;
» put it last.
• Lexer tools allow the writing of:
R = R1 |... | Rn | Error
» Token Error matches if nothing else matches.

46 Summary
• Regular expressions provide a concise notation for string patterns.
• Their use in lexical analysis requires small extensions:
» to resolve ambiguities,
» to handle errors.
• Efficient algorithms exist (next):
» they require only a single pass over the input,
» and few operations per character (table lookup).

47 5. Finite Automata
• Regular expressions = specification
• Finite automata = implementation
• A finite automaton consists of
» an input alphabet Σ,
» a finite set of states S,
» a start state n,
» a set of accepting states F ⊆ S,
» a set of transitions: state →input state.

48 Finite Automata
• Transition: s1 →a s2
• is read: in state s1, on input “a”, go to state s2.
• At end of input (or if no transition is possible):
» if in an accepting state => accept;
» otherwise => reject.

49 Finite Automata State Transition Graphs
Notation: a circle for a state, an arrow into a circle for the start state, a double circle for an accepting state, and an edge labeled a for a transition on input a.

50 A Simple Example
• A finite automaton that accepts only “1”: a start state with a single 1-transition into an accepting state.

51 Another Simple Example
• A finite automaton accepting any number of 1’s followed by a single 0.
• Alphabet: {0, 1}.
• Accepted input: 1*0 (a loop on 1 at the start state, then a 0 into the accepting state).

52 And Another Example
• Alphabet: {0, 1}.
• What language does this three-state automaton (with 0- and 1-transitions) recognize? Accepted inputs: to be answered later!

53 And Another Example
• Alphabet still {0, 1}.
• The operation of the automaton is not completely determined by the input:
» the start state has two transitions on 1, so on input “11” the automaton could be in either state.

54 Epsilon Moves
• Another kind of transition: ε-moves.
• The machine can move from state A to state B without reading input.

55 Deterministic and Nondeterministic Automata
• Deterministic Finite Automata (DFA):
» one transition per input per state,
» no ε-moves.
• Nondeterministic Finite Automata (NFA):
» can have multiple transitions for one input in a given state,
» can have ε-moves.
• Finite automata can have only a finite number of states.

56 Execution of Finite Automata
• A DFA can take only one path through the state graph:
» completely determined by the input.
• NFAs can choose:
» whether to make ε-moves,
» which of multiple transitions to take for a single input.

57 Acceptance of NFAs
• An NFA can be in multiple states at once. Example input: 1 0 1.
• Rule: an NFA accepts if it can get into a final state.

58 Acceptance of a Finite Automaton
• An FA (DFA or NFA) accepts an input string s iff there is some path in the transition diagram from the start state to some final state such that the edge labels along this path spell out s.

59 NFA vs. DFA (1)
• NFAs and DFAs recognize the same set of languages (the regular languages).
• DFAs are easier to implement:
» there are no choices to consider.

60 NFA vs. DFA (2)
• For a given language, the NFA can be simpler than the DFA.
• A DFA can be exponentially larger than an NFA.

61 Operations on NFA states
• ε-closure(s): the set of NFA states reachable from NFA state s on ε-transitions alone.
• ε-closure(S): the set of NFA states reachable from some NFA state s in S on ε-transitions alone.
• move(S, c): the set of NFA states to which there is a transition on input symbol c from some NFA state s in S.
• Notes:
» ε-closure(S) = ∪_{s ∈ S} ε-closure(s);
» ε-closure(s) = ε-closure({s});
» ε-closure(∅) = ?

62 Computing ε-closure
• Input: an NFA and a set of NFA states S.
• Output: E = ε-closure(S).
begin
  push all states in S onto stack;
  T := S;
  while stack is not empty do begin
    pop t, the top element, off of stack;
    for each state u with an edge from t to u labeled ε do
      if u is not in T then begin
        add u to T;
        push u onto stack
      end
  end;
  return T
end.
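The algorithm above transcribes almost line for line into Python. In this sketch the NFA is encoded as a dict from (state, symbol) to a set of successor states, with the empty string standing for ε:

```python
def eps_closure(nfa, S):
    # stack-based ε-closure, exactly as on the slide
    stack, T = list(S), set(S)
    while stack:
        t = stack.pop()
        for u in nfa.get((t, ""), set()):   # ε-edges out of t
            if u not in T:
                T.add(u)
                stack.append(u)
    return T

# NFA fragment: 0 -ε-> 1, 1 -ε-> 2, 2 -a-> 3
nfa = {(0, ""): {1}, (1, ""): {2}, (2, "a"): {3}}
print(sorted(eps_closure(nfa, {0})))   # [0, 1, 2]
```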

63 Simulating an NFA
• Input: an input string x ended with eof, and an NFA with start state s0 and final states F.
• Output: the answer “yes” if the NFA accepts x, “no” otherwise.
begin
  S := ε-closure({s0});
  c := next_symbol();
  while c != eof do begin
    S := ε-closure(move(S, c));
    c := next_symbol()
  end;
  if S ∩ F != ∅ then return “yes” else return “no”
end.
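A runnable sketch of this simulation, with the ε-closure routine of slide 62 inlined so the code stands on its own (same dict encoding of the NFA, `""` for ε):

```python
def eps_closure(nfa, S):
    stack, T = list(S), set(S)
    while stack:
        t = stack.pop()
        for u in nfa.get((t, ""), set()):
            if u not in T:
                T.add(u)
                stack.append(u)
    return T

def move(nfa, S, c):
    # states reachable from S on one input symbol c
    return {u for s in S for u in nfa.get((s, c), set())}

def nfa_accepts(nfa, start, finals, x):
    S = eps_closure(nfa, {start})
    for c in x:
        S = eps_closure(nfa, move(nfa, S, c))
    return bool(S & finals)          # S ∩ F != ∅

# NFA for 1*0 (slide 51): loop on 1 at state 0, then a 0 to final state 1.
nfa = {(0, "1"): {0}, (0, "0"): {1}}
assert nfa_accepts(nfa, 0, {1}, "1110")
assert not nfa_accepts(nfa, 0, {1}, "101")
```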

64 Regular Expressions to Finite Automata
• High-level sketch:
Lexical specification → Regular expressions → NFA → DFA → Optimized DFA → Table-driven implementation of DFA

65 Regular Expressions to NFA (1)
• For each kind of RegExpr, define an NFA.
» Notation: NFA for RegExpr A.
• For ε: a start state with an ε-edge to an accepting state.
• For input a: a start state with an a-edge to an accepting state.

66 Regular Expressions to NFA (2)
• For AB: connect the accepting state of A’s NFA to the start state of B’s NFA with an ε-edge.
• For A | B: a new start state with ε-edges into the NFAs for A and B; their accepting states have ε-edges into a new accepting state.

67 Regular Expressions to NFA (3)
• For A*: a new start state with ε-edges to A’s start state and to a new accepting state; A’s accepting state has ε-edges back to A’s start state and on to the new accepting state.

68 Example of RegExp -> NFA conversion
• Consider the regular expression (1|0)*1.
• The resulting NFA has states A through J: ε-edges set up the alternation (1|0) between a 1-edge C → E and a 0-edge D → F, further ε-edges implement the star loop, and a final 1-edge I → J reaches the accepting state J.

69 NFA to DFA

70 Regular Expressions to Finite Automata
• High-level sketch:
Lexical specification → Regular expressions → NFA → DFA → Optimized DFA → Table-driven implementation of DFA

71 RegExp -> NFA: an Example
• Consider the regular expression (1|0)*1.
• The NFA is the ten-state machine (states A through J) constructed on slide 68.

72 NFA to DFA. The Trick
• Simulate the NFA.
• Each state of the DFA = a non-empty subset of the states of the NFA.
• Start state = the set of NFA states reachable through ε-moves from the NFA start state.
• Add a transition S →a S’ to the DFA iff
» S’ is the set of NFA states reachable from any state in S after seeing the input a,
– considering ε-moves as well.

73 NFA -> DFA Example
• Applying the subset construction to the NFA for (1|0)*1 yields three DFA states:
» {A,B,C,D,H,I} (the start state),
» {F,G,A,B,C,D,H,I} (reached on 0),
» {E,J,G,A,B,C,D,H,I} (reached on 1; accepting, since it contains J),
with 0- and 1-transitions among them.

74 NFA to DFA. Remark
• An NFA may be in many states at any time.
• How many different state sets are there?
• If there are N states, the NFA must be in some subset of those N states.
• How many non-empty subsets are there?
» 2^N – 1, i.e., finitely many.

75 From an NFA to a DFA
• Subset construction algorithm.
• Input: an NFA N.
• Output: a DFA D with states S and transition table mv.
begin
  add ε-closure({s0}) as an unmarked state to S;
  while there is an unmarked state T in S do begin
    mark T;
    for each input symbol a do begin
      U := ε-closure(move(T, a));
      if U is not in S then add U as an unmarked state to S;
      mv[T, a] := U
    end
  end
end.
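A runnable sketch of the subset construction. Frozensets of NFA states serve as DFA state names, the ε-closure routine of slide 62 is inlined, and the small two-transition NFA for (1|0)*1 at the end is a hand-built example (not the ten-state Thompson NFA of slide 68):

```python
def eps_closure(nfa, S):
    stack, T = list(S), set(S)
    while stack:
        t = stack.pop()
        for u in nfa.get((t, ""), set()):
            if u not in T:
                T.add(u)
                stack.append(u)
    return frozenset(T)

def subset_construction(nfa, start, alphabet):
    s0 = eps_closure(nfa, {start})
    states, unmarked, mv = {s0}, [s0], {}
    while unmarked:
        T = unmarked.pop()                   # mark T
        for a in alphabet:
            moved = {u for s in T for u in nfa.get((s, a), set())}
            U = eps_closure(nfa, moved)
            if U not in states:              # new DFA state discovered
                states.add(U)
                unmarked.append(U)
            mv[(T, a)] = U
    return s0, states, mv

# NFA for (1|0)*1: state 0 loops on 0 and 1; on 1 it may also jump
# to the accepting state 2.
nfa = {(0, "0"): {0}, (0, "1"): {0, 2}}
s0, states, mv = subset_construction(nfa, 0, "01")
print(len(states))    # 2 reachable DFA states: {0} and {0, 2}

s = s0                # run the resulting DFA on "1101"
for c in "1101":
    s = mv[(s, c)]
print(2 in s)         # True: accepted, since the set contains NFA final 2
```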

76 Implementation
• A DFA can be implemented by a 2D table T:
» one dimension is “states”,
» the other dimension is “input symbols”,
» for every transition Si →a Sk, define mv[i, a] = k.
• DFA “execution”:
» if in state Si on input a, read mv[i, a] (= k) and move to state Sk.
» Very efficient.

77 Table Implementation of a DFA
• A three-state DFA over {0, 1} and its table:

     0  1
S    T  U
T    T  U
U    T  U

78 Simulation of a DFA
• Input: an input string x ended with eof, and a DFA with start state s0 and final states F.
• Output: the answer “yes” if the DFA accepts x, “no” otherwise.
begin
  s := s0;
  c := next_symbol();
  while c != eof do begin
    s := mv(s, c);
    c := next_symbol()
  end;
  if s is in F then return “yes” else return “no”
end.
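The table-driven execution can be sketched with the slide-77 table. Since that slide does not show which states are accepting, F = {"U"} below is an assumption made only to exercise the loop:

```python
# Transition table mv from slide 77
mv = {
    ("S", "0"): "T", ("S", "1"): "U",
    ("T", "0"): "T", ("T", "1"): "U",
    ("U", "0"): "T", ("U", "1"): "U",
}

def dfa_accepts(mv, s0, F, x):
    s = s0
    for c in x:
        s = mv[(s, c)]     # one table lookup per input character
    return s in F

assert dfa_accepts(mv, "S", {"U"}, "0011")      # ends in state U
assert not dfa_accepts(mv, "S", {"U"}, "0110")  # ends in state T
```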

79 Implementation (Cont.)
• NFA -> DFA conversion is at the heart of tools such as flex.
• But DFAs can be huge:
» DFA => optimized DFA: try to decrease the number of states;
» not always helpful!
• In practice, flex-like tools trade off speed for space in the choice of NFA and DFA representations.

80 Time-Space Tradeoffs
• RE to NFA, simulate NFA:
» time: O(|r|·|x|), space: O(|r|).
• RE to NFA, NFA to DFA, simulate DFA:
» time: O(|x|), space: O(2^|r|).
• Lazy transition evaluation:
» transitions are computed as needed at run time;
» computed transitions are stored in a cache for later use.

81 DFA to optimized DFA

82 Motivations
Problems:
1. Given a DFA M with k states, is it possible to find an equivalent DFA M’ (i.e., L(M) = L(M’)) with fewer than k states?
2. Given a regular language A, how do we find a machine with the minimum number of states?
Ex: A = L((a|b)*aba(a|b)*) can be accepted by a four-state NFA with states s, t, u, v: s loops on a and b, s -a-> t, t -b-> u, u -a-> v, and v loops on a and b.
By applying the subset construction, we can construct a DFA M2 with 2^4 = 16 states, of which only 6 are accessible from the initial state {s}.

83 Inaccessible states
• A state p ∈ Q is said to be inaccessible (or unreachable) [from the initial state] if there exists no path from the initial state to it. If a state is not inaccessible, it is accessible.
• Inaccessible states can be removed from the DFA without affecting the behavior of the machine.
• Problem: given a DFA (or NFA), how do we find all inaccessible states?

84 Finding all accessible states (like ε-closure)
• Input: an FA (DFA or NFA).
• Output: the set A of all accessible states.
begin
  push all start states onto stack;
  add all start states into A;
  while stack is not empty do begin
    pop t, the top element, off of stack;
    for each state u with an edge from t to u do
      if u is not in A then begin
        add u to A;
        push u onto stack
      end
  end;
  return A
end.

85 Minimization process
• Minimization process for a DFA:
» 1. Remove all inaccessible states.
» 2. Collapse all equivalent states.
• What does it mean that two states are equivalent?
» both states have the same observable behaviors, i.e.,
» there is no way to distinguish them, or,
» more formally: p and q are not equivalent (i.e., distinguishable) iff there is a string x ∈ Σ* s.t. exactly one of δ(p,x) and δ(q,x) is a final state,
» where δ(p,x) is the ending state of the path from p with x as the input.
• Equivalent states can be merged to form a simpler machine.

86 Example
• A six-state DFA over {a, b} (states 0 through 5, start state 0) minimizes to a four-state machine: states 1 and 2 collapse into the class {1,2}, and states 3 and 4 into {3,4}, leaving the classes {0}, {1,2}, {3,4}, {5}.

87 Quotient Construction
• M = (Q, Σ, δ, s, F): a DFA. ≈ is the relation on Q defined by:
p ≈ q iff for all x ∈ Σ*: δ(p,x) ∈ F iff δ(q,x) ∈ F.
• Property: ≈ is an equivalence relation.
• Hence it partitions Q into equivalence classes [p] = {q ∈ Q | p ≈ q} for p ∈ Q, and the quotient set Q/≈ = {[p] | p ∈ Q}.
• Every p ∈ Q belongs to exactly one class [p], and p ≈ q iff [p] = [q].
• Define the quotient machine M/≈ = (Q’, Σ, δ’, s’, F’) where
» Q’ = Q/≈; s’ = [s]; F’ = {[p] | p ∈ F}; and δ’([p], a) = [δ(p,a)] for all p ∈ Q and a ∈ Σ.

88 Minimization algorithm
• Input: a DFA.
• Output: an optimized (minimized) DFA.
1. Write down a table of all pairs {p,q}, initially unmarked.
2. Mark {p,q} if p ∈ F and q ∉ F, or vice versa.
3. Repeat until no more change:
3.1 if there is an unmarked pair {p,q} s.t. {move(p,a), move(q,a)} is marked for some a ∈ Σ, then mark {p,q}.
4. When done, p ≈ q iff {p,q} is not marked.
5. Merge all equivalent states into one class and return the resulting machine.
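A sketch of this table-filling algorithm, run on the slide-89 DFA (here `mv` maps each state to its (a, b) successors, and column indices 0/1 stand for inputs a/b):

```python
from itertools import combinations

# Slide-89 DFA: start 0; finals {1, 2, 5}; mv[state] = (on-a, on-b)
mv = {0: (1, 2), 1: (3, 4), 2: (4, 3), 3: (5, 5), 4: (5, 5), 5: (5, 5)}
F = {1, 2, 5}
states = sorted(mv)

marked = set()
for p, q in combinations(states, 2):      # step 2: final vs. non-final
    if (p in F) != (q in F):
        marked.add((p, q))

changed = True
while changed:                            # step 3: propagate marks
    changed = False
    for p, q in combinations(states, 2):
        if (p, q) in marked:
            continue
        for a in (0, 1):                  # inputs a and b
            r, s = sorted((mv[p][a], mv[q][a]))
            if r != s and (r, s) in marked:
                marked.add((p, q))
                changed = True
                break

# step 4: the unmarked pairs are the equivalent ones
equivalent = [pq for pq in combinations(states, 2) if pq not in marked]
print(equivalent)   # [(1, 2), (3, 4)] -- matching slide 93
```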

89 An Example
• The DFA (start state 0; final states 1, 2 and 5):

        a  b
>0      1  2
 1 (F)  3  4
 2 (F)  4  3
 3      5  5
 4      5  5
 5 (F)  5  5

90 Initial Table

1  -
2  -  -
3  -  -  -
4  -  -  -  -
5  -  -  -  -  -
   0  1  2  3  4

91 After step 2

1  M
2  M  -
3  -  M  M
4  -  M  M  -
5  M  -  -  M  M
   0  1  2  3  4

92 After first pass of step 3

1  M
2  M  -
3  -  M  M
4  -  M  M  -
5  M  M  M  M  M
   0  1  2  3  4

93 2nd pass of step 3. The result: 1 ≈ 2 and 3 ≈ 4.

1  M
2  M  -
3  M2 M  M
4  M2 M  M  -
5  M  M1 M1 M  M
   0  1  2  3  4

(M1/M2 show the pass of step 3 in which the pair was marked; the unmarked pairs {1,2} and {3,4} are the equivalent ones.)

