Slide 1 – Intermediate Code
CS 471, October 29, 2007

Slide 2 – Intermediate Code Generation (CS 471 – Fall 2007)

Pipeline: Source code → Lexical Analysis → Syntactic Analysis → Semantic Analysis → Intermediate Code Generation
Data flowing between phases: tokens → AST → AST’ → IR
Errors reported along the way: lexical errors, syntax errors, semantic errors

Slide 3 – Motivation

What we have so far: an abstract syntax tree
– with all the program information
– known to be correct: well-typed, nothing missing, no ambiguities

What we need: something “executable”, closer to the actual machine level of abstraction

Slide 4 – Why an Intermediate Representation?

What is the IR used for?
– Portability
– Optimization
– Component interface
– Program understanding

Compiler structure:
– The front end does lexical analysis, parsing, semantic analysis, and translation to IR.
– The back end does optimization of the IR and translation to machine instructions.

Try to keep machine dependences out of the IR for as long as possible.

Slide 5 – Intermediate Code

Abstract machine code (the intermediate representation) allows machine-independent code generation and optimization:
AST → IR → optimize → PowerPC / Alpha / x86

Slide 6 – What Makes a Good IR?

– Easy to translate from the AST
– Easy to translate to assembly
– Narrow interface: a small number of node types (instructions), which makes the IR easy to optimize and easy to retarget

Compare: AST (>40 node types) → IR (13 node types) → x86 (>200 opcodes)

Slide 7 – Intermediate Representations

Any representation between the AST and assembly:
– 3-address code: triples, quads (low-level)
– expression trees (high-level)

Tiger intermediate code for a[i] (the address is a plus i times the word size W):
MEM(BINOP(PLUS, a, BINOP(MUL, i, CONST W)))

Slide 8 – Intermediate Representation

Components:
– code representation
– symbol table
– analysis information
– string table

Issues:
– Use an existing IR or design a new one? Many are available: RTLs, SSA, LLVM, etc.
– How close should it be to the source/target?

Slide 9 – IR Selection

Using an existing IR:
– cost savings due to reuse
– it must be expressive and appropriate for the compiler's operations

Designing an IR:
– decide how close to machine code it should be
– decide how expressive it should be
– decide its structure
– consider combining different kinds of IRs

Slide 10 – The IR Machine

A machine with:
– an infinite number of temporaries (think registers)
– simple instructions: 3 operands, branching, calls with a simple calling convention
– simple code structure: an array of instructions, with labels defining the targets of branches

Slide 11 – Temporaries

The machine has an infinite number of temporaries; call them t0, t1, t2, ....
Temporaries can hold values of any type; the type of a temporary is derived from how it is generated.
Temporaries go out of scope with each function.
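The temporary-naming scheme above can be sketched as a tiny allocator. This is a hypothetical helper (the class name and API are ours, not from the slides): it hands out fresh temporaries, and creating one pool per function models function-scoped temporaries.

```python
import itertools

class TempPool:
    """Hands out fresh temporaries t0, t1, t2, ... for one function.

    Hypothetical helper: the slides only require that temporaries are
    unbounded in number and go out of scope with each function.
    """
    def __init__(self):
        self._counter = itertools.count()

    def fresh(self):
        # Each call yields the next unused temporary name.
        return f"t{next(self._counter)}"

# A new pool per function models "temporaries go out of scope
# with each function".
pool = TempPool()
print(pool.fresh(), pool.fresh(), pool.fresh())  # t0 t1 t2
```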

Slide 12 – Optimizing Compilers

Goal: get the program closer to machine code without losing the information needed to do useful optimizations.
This requires multiple IR stages:
AST → HIR → optimize → MIR → optimize → LIR per target (PowerPC, Alpha, x86) → optimize

Slide 13 – High-Level IR (HIR)

– Used early in the process; usually converted to a lower form later on.
– Preserves high-level language constructs: structured flow, variables, methods.
– Allows high-level optimizations based on properties of the source language (e.g., inlining, reuse of constant variables).

Example: the AST

Slide 14 – Medium-Level IR (MIR)

– Tries to reflect the range of features in the source language in a language-independent way.
– Intermediate between AST and assembly: unstructured jumps, registers, memory locations.
– Convenient for translation to high-quality machine code.

Other MIRs:
– quadruples: a = b OP c (“a” is explicit, not an arc)
– UCODE: stack-machine based (like Java bytecode)
– advantage of a tree IR: easy to generate, easier to do reasonable instruction selection
– advantage of quadruples: easier optimization

Slide 15 – Low-Level IR (LIR)

– Assembly code plus extra pseudo-instructions
– Machine dependent
– Translation to assembly code is trivial
– Allows optimization of code for low-level considerations: scheduling, memory layout

Slide 16 – IR Classification: Level

High-level:
for i := op1 to op2 step op3
    instructions
endfor

Medium-level (two loop copies, one per sign of the step):
    i := op1
    if step < 0 goto L2
L1: if i > op2 goto L3
    instructions
    i := i + step
    goto L1
L2: if i < op2 goto L3
    instructions
    i := i + step
    goto L2
L3:
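This high-to-medium lowering can be done mechanically. The following Python sketch emits the two-copy loop shown above; the helper name and the list-of-strings encoding are ours, and the body is duplicated because the sign of the step is unknown at compile time.

```python
def lower_for(i, op1, op2, step, body):
    """Lower `for i := op1 to op2 step op3` into jump-based code.

    `body` is a list of already-lowered instruction strings.  Two loop
    copies are emitted, one for a positive step and one for a negative
    step, matching the medium-level form on the slide.  (Sketch; the
    label-naming scheme is ours.)
    """
    out = [f"{i} := {op1}",
           f"if {step} < 0 goto L2",
           f"L1: if {i} > {op2} goto L3"]
    out += body                               # loop body, positive-step copy
    out += [f"{i} := {i} + {step}",
            "goto L1",
            f"L2: if {i} < {op2} goto L3"]
    out += body                               # loop body, negative-step copy
    out += [f"{i} := {i} + {step}",
            "goto L2",
            "L3:"]
    return out

for line in lower_for("i", "1", "10", "s", ["    x := x + i"]):
    print(line)
```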

Slide 17 – IR Classification: Structure

Graphical (trees, graphs):
– not easy to rearrange
– large structures

Linear:
– looks like pseudocode
– easy to rearrange

Hybrid: combines graphical and linear IRs. Example:
– a low-level linear IR for basic blocks, and
– a graph to represent the flow of control

Slide 18 – Graphical IRs

Parse trees and abstract syntax trees:
– high-level; useful for source-level information
– retain syntactic structure

Common uses:
– source-to-source translation
– semantic analysis
– syntax-directed editors

Slide 19 – Graphical IRs Often Use Basic Blocks

A basic block is a sequence of consecutive statements in which flow of control enters at the beginning and leaves at the end, without halting or the possibility of branching except at the end.

Partitioning a sequence of statements into basic blocks:
1. Determine the leaders (the first statements of basic blocks):
   – the first statement is a leader
   – the target of a (conditional or unconditional) branch is a leader
   – a statement following a branch is a leader
2. For each leader, its basic block consists of the leader and all statements up to, but not including, the next leader.
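The two-step partitioning algorithm above can be sketched directly. The instruction encoding here is ours: instructions are plain strings, branches end in "goto L", and a side map gives each label's index. The example input is the lowered fibonacci code that appears two slides later.

```python
def find_leaders(instrs, label_at):
    """Step 1: determine leader indices (first statements of basic blocks)."""
    leads = {0}                                  # the first statement is a leader
    for idx, ins in enumerate(instrs):
        if "goto" in ins:
            target = ins.split("goto")[1].strip()
            leads.add(label_at[target])          # a branch target is a leader
            if idx + 1 < len(instrs):
                leads.add(idx + 1)               # the statement after a branch
    return sorted(leads)

def basic_blocks(instrs, label_at):
    """Step 2: each block runs from its leader up to the next leader."""
    bounds = find_leaders(instrs, label_at) + [len(instrs)]
    return [instrs[a:b] for a, b in zip(bounds, bounds[1:])]

# Lowered fibonacci code (labels mapped to instruction indices):
instrs = ["read(n)", "f0 := 0", "f1 := 1", "if n <= 1 goto L0", "i := 2",
          "if i <= n goto L1", "return f2", "f2 := f0 + f1", "f0 := f1",
          "f1 := f2", "i := i + 1", "goto L2", "return n"]
label_at = {"L2": 5, "L1": 7, "L0": 12}
print(find_leaders(instrs, label_at))            # [0, 4, 5, 6, 7, 12]
print(len(basic_blocks(instrs, label_at)))       # 6
```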

Slide 20 – Basic Blocks

unsigned int fibonacci(unsigned int n) {
    unsigned int f0, f1, f2;
    f0 = 0;
    f1 = 1;
    if (n <= 1) return n;
    for (int i = 2; i <= n; i++) {
        f2 = f0 + f1;
        f0 = f1;
        f1 = f2;
    }
    return f2;
}

Lowered code:
    read(n)
    f0 := 0
    f1 := 1
    if n <= 1 goto L0
    i := 2
L2: if i <= n goto L1
    return f2
L1: f2 := f0 + f1
    f0 := f1
    f1 := f2
    i := i + 1
    goto L2
L0: return n

Slide 21 – Basic Blocks (cont.)

For the lowered code above, the leaders are: read(n) (the first statement), i := 2, the test i <= n, return f2, f2 := f0 + f1, and return n.

The resulting basic blocks, between entry and exit:
– read(n); f0 := 0; f1 := 1; if n <= 1 goto L0
– i := 2
– if i <= n goto L1
– return f2
– f2 := f0 + f1; f0 := f1; f1 := f2; i := i + 1; goto L2
– return n

Slide 22 – Graphical IRs

Trees, for a basic block*:
– root: an operator
– up to two children: the operands
– trees can be combined

Uses: algebraic simplifications; may generate locally optimal code.

L1: i := 2
    t1 := i + 1
    t2 := t1 > 0
    if t2 goto L1

The block becomes three trees, assgn(i, 2), add(t1, i, 1), and gt(t2, t1, 0), which can be combined into a single tree for the block.

*straight-line code with no branches or branch targets.

Slide 23 – Graphical IRs

Directed acyclic graphs (DAGs): like compressed trees
– leaves: variables and constants available on entry
– internal nodes: operators, annotated with variable names
– distinct left/right children

Used for basic blocks (DAGs don't show control flow).
– Can generate efficient code: DAGs encode common subexpressions.
– But difficult to transform; good for analysis.

Slide 24 – Graphical IRs: Generating DAGs

For each statement:
– Check whether an operand is already present; if not, create a leaf for it.
– Check whether there is a parent of the operands that represents the same operation; if not, create one.
– Label the node representing the result with the name of the destination variable, and remove that label from all other nodes in the DAG.

Slide 25 – Graphical IRs: DAG Example

m := 2 * y * z
n := 3 * y * z
p := 2 * y - z

The subexpression 2 * y is computed once and shared by m and p.
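The procedure from the previous slide, applied to this example, can be sketched by hashing nodes on (op, left, right). The encoding is ours: statements are pre-split into binary operations, and a labels map records which node currently holds each name (a full implementation would also strip stale labels from nodes, as the slide notes).

```python
def build_dag(stmts):
    """Build a DAG for a basic block by hashing on (op, left, right).

    stmts: list of (dest, op, a, b) with binary ops only.  Leaves are
    created on first use; an interior node is reused if the same
    operation on the same operands already exists.  Sketch only.
    """
    nodes = {}   # structural key -> node id
    labels = {}  # name -> node id currently holding its value

    def leaf(x):
        nid = labels.get(x)
        if nid is None:                      # operand not present: make a leaf
            nid = nodes.setdefault(("leaf", x), len(nodes))
            labels[x] = nid
        return nid

    for dest, op, a, b in stmts:
        key = (op, leaf(a), leaf(b))
        nid = nodes.setdefault(key, len(nodes))  # reuse matching parent if any
        labels[dest] = nid                       # dest now labels this node
    return nodes, labels

# The example, split into binary operations:
block = [("t1", "*", "2", "y"), ("m", "*", "t1", "z"),
         ("t2", "*", "3", "y"), ("n", "*", "t2", "z"),
         ("t3", "*", "2", "y"), ("p", "-", "t3", "z")]
nodes, labels = build_dag(block)
print(labels["t1"] == labels["t3"])  # True: the 2 * y node is shared
```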

Slide 26 – Graphical IRs: Control Flow Graphs (CFGs)

Each node corresponds to one of:
– a basic block, or
– part of a basic block (when facts must be determined at specific points within a block), or
– a single statement (costs more space and time).

Each edge represents flow of control.

Slide 27 – Graphical IRs: Dependence Graphs

A dependence graph represents constraints on the sequencing of operations. A dependence is a relation between two statements that puts a constraint on their execution order:
– control dependence, based on the program's control flow
– data dependence, based on the flow of data between statements

Nodes represent statements; edges represent dependences, with edge labels giving the types of dependences. Dependence graphs are built for specific optimizations, then discarded.

Slide 28 – Graphical IRs: Dependence Graph Example

s1:     a := b + c
s2:     if a > 10 goto L1
s3:     d := b * e
s4:     e := d + 1
s5: L1: d := e / 2

– control dependence: s3 and s4 are executed only when a <= 10
– true (flow) dependence: s2 uses a value defined in s1 (a read-after-write dependence)
– antidependence: s4 defines a value used in s3 (a write-after-read dependence)
– output dependence: s5 defines a value also defined in s3 (a write-after-write dependence)
– input dependence: s5 uses a value also used in s3 (a read-after-read situation; it places no constraint on execution order, but is used in some optimizations)
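The data-dependence kinds above can be computed mechanically from each statement's def and use sets. This sketch uses our own encoding (name, defs, uses) per statement; control dependence needs the CFG and is not computed here, and the pairwise scan also reports dependences the slide leaves implicit.

```python
def classify(defs_uses):
    """Report data dependences between statement pairs in program order.

    defs_uses: list of (name, defs, uses) tuples, where defs and uses
    are sets of variable names.  Returns (kind, earlier, later) tuples.
    Sketch only; it does not account for intervening redefinitions.
    """
    deps = []
    for i in range(len(defs_uses)):
        si, di, ui = defs_uses[i]
        for j in range(i + 1, len(defs_uses)):
            sj, dj, uj = defs_uses[j]
            if di & uj: deps.append(("flow (RAW)", si, sj))    # def then use
            if ui & dj: deps.append(("anti (WAR)", si, sj))    # use then def
            if di & dj: deps.append(("output (WAW)", si, sj))  # def then def
            if ui & uj: deps.append(("input (RAR)", si, sj))   # use then use
    return deps

# Def/use sets for the example on the slide:
stmts = [("s1", {"a"}, {"b", "c"}),
         ("s2", set(), {"a"}),
         ("s3", {"d"}, {"b", "e"}),
         ("s4", {"e"}, {"d"}),
         ("s5", {"d"}, {"e"})]
for dep in classify(stmts):
    print(dep)
```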

Slide 29 – IR Classification: Structure (repeat of slide 17, as a transition to linear IRs)

Slide 30 – Linear IRs

A sequence of instructions that execute in order of appearance. Control flow is represented by conditional branches and jumps.

Common representations:
– stack machine code
– three-address code

Slide 31 – Linear IRs: Stack Machine Code

Assumes the presence of an operand stack; useful for stack architectures and the JVM. Operations typically pop their operands and push their results.

Advantages:
– easy code generation
– compact form

Disadvantages:
– difficult to rearrange
– difficult to reuse expressions
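A sketch of how such code executes: a hypothetical mini instruction set (push/load/add/mul; the names are ours, not from any real stack ISA) evaluating x + y * z. Note that each operation pops its operands and pushes its result.

```python
def run_stack_code(code, env):
    """Interpret stack-machine code over an environment of variable values."""
    stack = []
    for ins in code:
        if ins[0] == "push":          # push a constant
            stack.append(ins[1])
        elif ins[0] == "load":        # push a variable's value
            stack.append(env[ins[1]])
        elif ins[0] == "add":         # pop two operands, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif ins[0] == "mul":         # pop two operands, push their product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# Stack code for x + y * z (operands pushed in postfix order):
code = [("load", "x"), ("load", "y"), ("load", "z"), ("mul",), ("add",)]
print(run_stack_code(code, {"x": 2, "y": 3, "z": 4}))  # 14
```

The compactness the slide mentions is visible here: no operand names appear in the arithmetic instructions at all, since the stack positions are implicit.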

Slide 32 – Example: Three-Address Code

Each instruction is of the form x := y op z, where y and z can only be registers or constants, just like assembly. This is a common form of intermediate code.

The AST expression x + y * z is translated as:
    t1 := y * z
    t2 := x + t1

Each subexpression has a “home” (a temporary holding its value).
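Producing this translation from an expression tree is a short post-order walk. In this sketch the tree encoding is ours: a node is either a variable name or an (op, left, right) tuple, and each subexpression's "home" is a fresh temporary.

```python
import itertools

def gen_tac(node, code, temps):
    """Translate an expression tree into three-address code.

    node: a variable name (str) or an (op, left, right) tuple.
    Appends instructions to `code` and returns the name holding
    the node's value.  Sketch with our own tree encoding.
    """
    if isinstance(node, str):
        return node                      # a leaf is its own home
    op, left, right = node
    a = gen_tac(left, code, temps)       # post-order: operands first
    b = gen_tac(right, code, temps)
    t = f"t{next(temps)}"                # fresh home for this subexpression
    code.append(f"{t} := {a} {op} {b}")
    return t

code = []
gen_tac(("+", "x", ("*", "y", "z")), code, itertools.count(1))
for line in code:
    print(line)
# t1 := y * z
# t2 := x + t1
```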

Slide 33 – Three-Address Code

Result, operand, operand, operator:
– x := y op z, where op is a binary operator and x, y, z can be variables, constants, or compiler-generated temporaries (intermediate results)
– this can be written in shorthand as a quadruple

Statements:
– assignment: x := y op z
– copy statements: x := y
– unconditional jump: goto L

Slide 34 – Three-Address Code: Statements (cont.)

– conditional jump: if x relop y goto L
– indexed assignments: x := y[j] or s[j] := z
– address and pointer assignments (for C): x := &y, x := *p, *x := y
– param x
– call p, n
– return y

Slide 35 – Bigger Example

Consider the AST for a := b * (-c) + b * (-c). It translates to:

    t1 := -c
    t2 := b * t1
    t3 := -c
    t4 := b * t3
    t5 := t2 + t4
    a := t5

Slide 36 – Summary

IRs provide the interface between the front and back ends of the compiler. They should be machine- and language-independent, and amenable to optimization.

Next time: the Tiger IR.

