Code Generation.

Code Generation

Code Generation
  The target machine
  Runtime environment
  Basic blocks and flow graphs
  Instruction selection
  Instruction selector generators
  Register allocation
  Peephole optimization

The Target Machine
A byte-addressable machine with four bytes to a word and n general-purpose registers.
Two-address instructions of the form: op source, destination
Six addressing modes (mode, form, address, added cost):
  absolute            M       M                           1
  register            R       R                           0
  indexed             c(R)    c + content(R)              1
  indirect register   *R      content(R)                  0
  indirect indexed    *c(R)   content(c + content(R))     1
  literal             #c      the constant c              1

Examples
  MOV R0, M
  MOV 4(R0), M
  MOV *R0, M
  MOV *4(R0), M
  MOV #1, R0

Instruction Costs
Cost of an instruction = 1 + the costs of the source and destination addressing modes.
This cost corresponds to the length (in words) of the instruction.
Minimizing instruction length therefore also tends to minimize instruction execution time.

Examples (instruction and its cost):
  MOV R0, R1            1
  MOV R0, M             2
  MOV #1, R0            2
  MOV 4(R0), *12(R1)    3
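
To make the cost rule concrete, here is a small Python sketch (not from the slides) that classifies the addressing mode of each operand and adds up the instruction length; the operand syntax accepted by mode_of and the mode names are assumptions chosen to match the table above.

```python
import re

# Extra words added to the 1-word opcode by each addressing mode
# (register and indirect-register operands fit in the instruction word).
MODE_COST = {
    "register": 0, "ind_register": 0,
    "absolute": 1, "indexed": 1, "ind_indexed": 1, "literal": 1,
}

def mode_of(operand: str) -> str:
    """Classify an operand such as 'R0', '*R1', '4(R0)', '*4(R1)', '#1', 'M'."""
    if re.fullmatch(r"R\d+", operand):            return "register"
    if re.fullmatch(r"\*R\d+", operand):          return "ind_register"
    if re.fullmatch(r"-?\d+\(R\d+\)", operand):   return "indexed"
    if re.fullmatch(r"\*-?\d+\(R\d+\)", operand): return "ind_indexed"
    if operand.startswith("#"):                   return "literal"
    return "absolute"                             # a plain name is a memory address

def cost(instr: str) -> int:
    """Cost (length in words) = 1 + cost of each operand's addressing mode."""
    _, operands = instr.split(None, 1)
    return 1 + sum(MODE_COST[mode_of(op.strip())] for op in operands.split(","))

assert cost("MOV R0, R1") == 1
assert cost("MOV R0, M") == 2
assert cost("MOV #1, R0") == 2
assert cost("MOV 4(R0), *12(R1)") == 3
```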

An Example
Consider a := b + c. Four possible code sequences:
1. MOV b, R0                 2. MOV b, a
   ADD c, R0                    ADD c, a
   MOV R0, a
3. If R0, R1, R2 contain the addresses of a, b, c:
   MOV *R1, *R0
   ADD *R2, *R0
4. If R1, R2 contain the values of b, c:
   ADD R2, R1
   MOV R1, a

Instruction Selection
Code skeleton: translate x := y + z as
  MOV y, R0
  ADD z, R0
  MOV R0, x
Applying the skeleton to a := b + c and d := a + e gives
  MOV b, R0        MOV a, R0
  ADD c, R0        ADD e, R0
  MOV R0, a        MOV R0, d
where the fourth instruction (MOV a, R0) is redundant after the third.
Multiple choices: a := a + 1 can be translated as
  MOV a, R0
  ADD #1, R0       or simply    INC a
  MOV R0, a

Register Allocation
Register allocation: select the set of variables that will reside in registers.
Register assignment: pick the specific register in which each such variable will reside.
The problem is NP-complete.

An Example
  t := a + b            t := a + b
  t := t * c            t := t + c
  t := t / d            t := t / d

  MOV a, R1             MOV a, R0
  ADD b, R1             ADD b, R0
  MUL c, R0             ADD c, R0
  DIV d, R0             SRDA R0, 32
  MOV R1, t             DIV d, R0
                        MOV R1, t
(On this machine multiplication and division operate on the register pair (R0, R1), so the register assigned to t determines whether the extra shift SRDA R0, 32 is needed before the division.)

Runtime Environments
A translation needs to relate the static source text of a program to the dynamic actions that must occur at runtime to implement the program; essentially, this is the relationship between names and data objects.
The runtime support system consists of routines that manage the allocation and deallocation of data objects.

Activations
A procedure definition associates an identifier (name) with a statement (body).
Each execution of a procedure body is an activation of the procedure.
An activation tree depicts the way control enters and leaves activations.

An Example
program sort(input, output);
  var a: array [0..10] of integer;
  procedure readarray;
    var i: integer;
    begin for i := 1 to 9 do read(a[i]) end;
  function partition(y, z: integer): integer;
    var i, j, x, v: integer;
    begin … end;
  procedure quicksort(m, n: integer);
    var i: integer;
    begin
      if (n > m) then begin
        i := partition(m, n);
        quicksort(m, i - 1);
        quicksort(i + 1, n)
      end
    end;
  begin
    a[0] := -9999; a[10] := 9999;
    readarray; quicksort(1, 9)
  end.

An Example
(Activation tree for one run of sort: the root s calls r and q(1,9); q(1,9) calls p(1,9), q(1,3), and q(5,9); q(1,3) calls q(1,0) and q(2,3); q(5,9) calls q(5,5) and q(7,9).)

Scope
A declaration associates information with a name.
Scope rules determine which declaration of a name applies.
The portion of the program to which a declaration applies is called the scope of that declaration.

Bindings of Names
The same name may denote different data objects (storage locations) at runtime.
An environment is a function that maps a name to a storage location.
A state is a function that maps a storage location to the value held there.
  name  --environment-->  storage location  --state-->  value
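
A toy Python sketch of the two mappings; the names, addresses, and values below are made up purely for illustration.

```python
# environment: name -> storage location; state: storage location -> value
environment = {"i": 0x1000, "prod": 0x1004}    # hypothetical addresses
state = {0x1000: 7, 0x1004: 42}

def value_of(name: str):
    return state[environment[name]]            # compose the two mappings

print(value_of("prod"))                        # 42

# A new activation may bind the same name to a different location,
# while the old location (and its value) still exists in the state.
environment = {**environment, "i": 0x2000}
state[0x2000] = 1
print(value_of("i"))                           # 1
```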

Static and Dynamic Notions

Storage Organization
  Target code: static
  Static data objects: static
  Dynamic data objects: heap
  Automatic data objects: stack
(Figure: runtime memory divided into static data, stack, and heap areas.)

Activation Records
Fields of an activation record (on the stack):
  returned value
  actual parameters
  optional control link
  optional access link
  machine status
  local data
  temporary data

Activation Records
(Figure: two activation records on the runtime stack, each consisting of returned value and parameters, links and machine status, and local and temporary data; the frame pointer marks a fixed position in the current record and the stack pointer marks the top of the stack.)

Declarations
P → {offset := 0} D
D → D ";" D
D → id ":" T              {enter(id.name, T.type, offset); offset := offset + T.width}
T → integer               {T.type := integer; T.width := 4}
T → float                 {T.type := float; T.width := 8}
T → array "[" num "]" of T1   {T.type := array(num.val, T1.type); T.width := num.val * T1.width}
T → "*" T1                {T.type := pointer(T1.type); T.width := 4}

Nested Procedures
P → D
D → D ";" D | id ":" T | proc id ";" D ";" S
(Figure: nested symbol tables for the sort program, each with a header; the tables for readarray, exchange, quicksort, and partition are linked to the enclosing table and hold entries such as a, x, i, j, k, v.)

Symbol Table Handling
Operations:
  mktable(previous): creates a new table and returns a pointer to the table
  enter(table, name, type, offset): creates a new entry for name in the table
  addwidth(table, width): records the cumulative width of entries in the header
  enterproc(table, name, newtable): creates a new entry for procedure name in the table
Stacks:
  tblptr: pointers to symbol tables
  offset: the next available relative address

Declarations P  M D {addwidth(top(tblptr), top(offset)); pop(tblptr); pop(offset)} M   {t := mktable(nil); push(t, tblptr); push(0, offset)} D  D “;” D D  proc id “;” N D “;” S {t := top(tblptr); addwidth(t, top(offset)); pop(tblptr); pop(offset); enterproc(top(tblptr), id.name, t)} D  id “:” T {enter(top(tblptr), id.name, T.type, top(offset)); top(offset) := top(offset) + T.width} N   {t := mktable(top(tblptr)); push(t, tblptr); push(0, offset)}

Records T  record D end T  record L D end {T.type := record(top(tblptr)); T.width := top(offset); pop(tblptr); pop(offset)} L   {t := mktable(nil); push(t, tblptr); push(0, offset)}

Basic Blocks A basic block is a sequence of consecutive statements in which control enters at the beginning and leaves at the end without halt or possibility of branching except at the end

An Example
(1) prod := 0
(2) i := 1
(3) t1 := 4 * i
(4) t2 := a[t1]
(5) t3 := 4 * i
(6) t4 := b[t3]
(7) t5 := t2 * t4
(8) t6 := prod + t5
(9) prod := t6
(10) t7 := i + 1
(11) i := t7
(12) if i <= 20 goto (3)

Control Flow Graphs
A (control) flow graph is a directed graph.
The nodes of the graph are basic blocks.
There is an edge from B1 to B2 iff B2 can immediately follow B1 in some execution sequence, that is:
  there is a jump from the end of B1 to the beginning of B2, or
  B2 immediately follows B1 in the program text and B1 does not end in an unconditional jump.
B1 is a predecessor of B2, and B2 is a successor of B1.

An Example
Block B0:
  (1) prod := 0
  (2) i := 1
Block B1:
  (3) t1 := 4 * i
  (4) t2 := a[t1]
  (5) t3 := 4 * i
  (6) t4 := b[t3]
  (7) t5 := t2 * t4
  (8) t6 := prod + t5
  (9) prod := t6
  (10) t7 := i + 1
  (11) i := t7
  (12) if i <= 20 goto (3)
Edges: B0 → B1 and B1 → B1.

Construction of Basic Blocks
Determine the set of leaders:
  the first statement is a leader,
  the target of a jump is a leader,
  any statement immediately following a jump is a leader.
For each leader, its basic block consists of the leader and all statements up to, but not including, the next leader or the end of the program.
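
A small Python sketch of the leader rules; the textual form of the three-address statements, and the "goto (n)" syntax used to spot jumps and their targets, are assumptions rather than anything fixed by the slides. Run on the dot-product block above, it produces the two blocks B0 and B1.

```python
import re

def basic_blocks(code):
    """Partition a list of three-address statements into basic blocks.
    Statements are 1-indexed; jumps are written '... goto (n)'."""
    leaders = {1}                                     # rule 1: first statement
    for idx, stmt in enumerate(code, start=1):
        m = re.search(r"goto \((\d+)\)", stmt)
        if m:
            leaders.add(int(m.group(1)))              # rule 2: target of a jump
            if idx + 1 <= len(code):
                leaders.add(idx + 1)                  # rule 3: statement after a jump
    order = sorted(leaders)
    return [code[start - 1 : (order[i + 1] - 1 if i + 1 < len(order) else len(code))]
            for i, start in enumerate(order)]

code = [
    "prod := 0", "i := 1",
    "t1 := 4 * i", "t2 := a[t1]", "t3 := 4 * i", "t4 := b[t3]",
    "t5 := t2 * t4", "t6 := prod + t5", "prod := t6",
    "t7 := i + 1", "i := t7", "if i <= 20 goto (3)",
]
for block in basic_blocks(code):
    print(block)   # two blocks: statements 1-2 and statements 3-12
```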

Representation of Basic Blocks
Each basic block is represented by a record consisting of:
  a count of the number of statements,
  a pointer to the leader,
  a list of predecessors,
  a list of successors.

DAG Representation of Blocks
A DAG makes it easy to determine:
  common subexpressions,
  names used in the block but evaluated outside the block,
  names whose values could be used outside the block.

DAG Representation of Blocks
Leaves are labeled by unique identifiers.
Interior nodes are labeled by operator symbols.
Interior nodes may also be given a sequence of attached identifiers, which hold the value represented by the node.

An Example
(1) t1 := 4 * i
(2) t2 := a[t1]
(3) t3 := 4 * i
(4) t4 := b[t3]
(5) t5 := t2 * t4
(6) t6 := prod + t5
(7) prod := t6
(8) t7 := i + 1
(9) i := t7
(10) if i <= 20 goto (1)
(DAG: leaves prod0, a, b, i0, 4, 1, 20; the node 4 * i0 carries t1, t3; the [] nodes carry t2 and t4; * carries t5; + carries t6, prod; i0 + 1 carries t7, i; the <= node compares against 20 and carries the jump target (1).)

Constructing a DAG
Consider x := y op z (other statements can be handled similarly).
If node(y) is undefined, create a leaf labeled y and let node(y) be this leaf.
If node(z) is undefined, create a leaf labeled z and let node(z) be that leaf.

Constructing a DAG
Determine whether there is a node labeled op whose left child is node(y) and whose right child is node(z). If not, create such a node. Let n be the node found or created.
If node(x) is defined, delete x from its list of attached identifiers.
Append x to the list of attached identifiers of n and set node(x) to n.
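
A compact Python sketch of the node(...) bookkeeping for statements of the form x := y op z; the Node class, the (op, left, right) lookup table, and the driver at the bottom are illustrative assumptions.

```python
class Node:
    def __init__(self, label, left=None, right=None):
        self.label, self.left, self.right = label, left, right
        self.ids = []                      # identifiers currently attached to this node

node = {}        # current name -> DAG node
ops = {}         # (op, left node, right node) -> existing interior node

def node_of(name):
    """node(name): if undefined, create a leaf for the name's initial value."""
    if name not in node:
        node[name] = Node(name)            # leaf for the initial value (often drawn as name0)
        node[name].ids.append(name)
    return node[name]

def assign(x, y, op, z):
    """Process x := y op z, reusing the node for a common subexpression if one exists."""
    key = (op, node_of(y), node_of(z))
    n = ops.get(key)
    if n is None:
        n = ops[key] = Node(op, key[1], key[2])
    if x in node and x in node[x].ids:     # x stops naming its previous node
        node[x].ids.remove(x)
    n.ids.append(x)
    node[x] = n

assign("t1", "4", "*", "i")
assign("t3", "4", "*", "i")                # common subexpression: same node as t1
print(node["t1"] is node["t3"])            # True
```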

Reconstructing Quadruples
Evaluate the interior nodes in topological order.
Assign the evaluated value to one of the node's attached identifiers x, preferring one whose value is needed outside the block.
If there is no attached identifier, create a new temporary to hold the value.
If there are additional attached identifiers y1, y2, …, yk whose values are also needed outside the block, add y1 := x, y2 := x, …, yk := x.

An Example
The block reconstructed from the DAG:
(1) t1 := 4 * i
(2) t2 := a[t1]
(3) t3 := b[t1]
(4) t4 := t2 * t3
(5) prod := prod + t4
(6) i := i + 1
(7) if i <= 20 goto (1)
(Figure: the same DAG as on the previous example slide.)

Generating Code From DAGs
  t1 := a + b
  t2 := c + d
  t3 := e - t2
  t4 := t1 - t3
With only R0 and R1 available:
  (1) MOV a, R0
  (2) ADD b, R0
  (3) MOV c, R1
  (4) ADD d, R1
  (5) MOV R0, t1
  (6) MOV e, R0
  (7) SUB R1, R0
  (8) MOV t1, R1
  (9) SUB R0, R1
  (10) MOV R1, t4

Rearranging the Order
  t2 := c + d
  t3 := e - t2
  t1 := a + b
  t4 := t1 - t3
  (1) MOV c, R0
  (2) ADD d, R0
  (3) MOV e, R1
  (4) SUB R0, R1
  (5) MOV a, R0
  (6) ADD b, R0
  (7) SUB R1, R0
  (8) MOV R0, t4

A Heuristic Ordering for DAGs
Attempt, as far as possible, to make the evaluation of a node immediately follow the evaluation of its leftmost argument.

Node Listing Algorithm
while unlisted interior nodes remain do begin
  select an unlisted node n, all of whose parents have been listed;
  list n;
  while the leftmost child m of n has no unlisted parents and is not a leaf do begin
    list m;
    n := m
  end
end

An Example
(Figure: a DAG whose interior nodes are numbered 1 through 7 over the leaves a0, b0, c0, d0, e0.)
Resulting order of evaluation:
  t7 := d + e
  t6 := a + b
  t5 := t6 - c
  t4 := t5 * t7
  t3 := t4 - e
  t2 := t6 + t4
  t1 := t2 * t3

Generating Code From Trees
There exists an algorithm that determines the optimal order in which to evaluate the statements in a block when the DAG representation of the block is a tree.
Optimal order here means the order that yields the shortest instruction sequence.

Optimal Ordering for Trees
Label each node of the tree, bottom-up, with an integer denoting the fewest number of registers required to evaluate the subtree rooted at that node with no stores of intermediate results.
Generate code during a tree traversal by first evaluating the operand that requires more registers.

The Labeling Algorithm
if n is a leaf then
  if n is the leftmost child of its parent then label(n) := 1
  else label(n) := 0
else begin
  let n1, n2, …, nk be the children of n ordered by label, so that label(n1) ≥ label(n2) ≥ … ≥ label(nk);
  label(n) := max over 1 ≤ i ≤ k of (label(ni) + i - 1)
end
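
The labeling rule can be written directly as a recursive function; the tuple-based tree encoding (leaves as strings, interior nodes as (op, child1, ..., childk)) is an assumption, and the example tree is the one used with gencode later in these slides.

```python
def label(n, leftmost=True):
    """Fewest registers needed to evaluate n with no stores of intermediate results."""
    if isinstance(n, str):                          # a leaf
        return 1 if leftmost else 0
    _, *children = n
    ls = sorted((label(c, i == 0) for i, c in enumerate(children)), reverse=True)
    return max(l + i for i, l in enumerate(ls))     # label(ni) + i - 1, 1-indexed

# the tree for t4 := (a + b) - (e - (c + d)) used later with gencode
t4 = ("-", ("+", "a", "b"), ("-", "e", ("+", "c", "d")))
print(label(t4))                                    # 2: two registers suffice
```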

An Example
For a binary interior node with child labels l1 and l2:
  label(n) = max(l1, l2)   if l1 ≠ l2
  label(n) = l1 + 1        if l1 = l2
(Figure: the tree for t4 = t1 - t3 with t1 = a + b, t2 = c + d, t3 = e - t2; the leaves labeled 1 or 0 give t1 and t2 label 1, and t3 and t4 label 2.)

Code Generation From a Labeled Tree
Use a stack rstack to allocate the registers R0, R1, …, R(r-1).
The value of a tree is always computed into the top register on rstack.
The function swap(rstack) interchanges the top two registers on rstack.
Use a stack tstack to allocate temporary memory locations T0, T1, ….

Case Analysis
(Figure: diagrams of a node n = op(n1, n2) in each case: case 1, n2 is a leaf name; case 2, label(n1) < label(n2); case 3, label(n2) ≤ label(n1); case 4, both labels ≥ r.)

The Function gencode
procedure gencode(n);
begin
  if n is a left leaf representing operand name and n is the leftmost child of its parent then
    print 'MOV' || name || ',' || top(rstack)
  else if n is an interior node with operator op, left child n1, and right child n2 then
    if label(n2) = 0 then
      /* case 1 */
    else if 1 ≤ label(n1) < label(n2) and label(n1) < r then
      /* case 2 */
    else if 1 ≤ label(n2) ≤ label(n1) and label(n2) < r then
      /* case 3 */
    else
      /* case 4, both labels ≥ r */
end

The Function gencode
/* case 1 */
begin
  let name be the operand represented by n2;
  gencode(n1);
  print op || name || ',' || top(rstack)
end
/* case 2 */
begin
  swap(rstack);
  gencode(n2);
  R := pop(rstack);
  gencode(n1);
  print op || R || ',' || top(rstack);
  push(rstack, R);
  swap(rstack)
end

The Function gencode
/* case 3 */
begin
  gencode(n1);
  R := pop(rstack);
  gencode(n2);
  print op || top(rstack) || ',' || R;
  push(rstack, R)
end
/* case 4 */
begin
  gencode(n2);
  T := pop(tstack);
  print 'MOV' || top(rstack) || ',' || T;
  gencode(n1);
  push(tstack, T);
  print op || T || ',' || top(rstack)
end

An Example
gencode(t4) [R1, R0]          /* case 2 */
  gencode(t3) [R0, R1]        /* case 3 */
    gencode(e) [R0, R1]       /* leaf */
      print MOV e, R1
    gencode(t2) [R0]          /* case 1 */
      gencode(c) [R0]         /* leaf */
        print MOV c, R0
      print ADD d, R0
    print SUB R0, R1
  gencode(t1) [R0]            /* case 1 */
    gencode(a) [R0]           /* leaf */
      print MOV a, R0
    print ADD b, R0
  print SUB R1, R0
(Tree: t4 = t1 - t3, t1 = a + b, t3 = e - t2, t2 = c + d, with labels 1 and 2 as computed on the previous slide.)
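
For reference, here is a runnable Python sketch of the whole gencode scheme, with the binary labeling rule inlined; the tuple tree encoding, the use of Python lists as rstack/tstack with the top at the end, and the fixed register count R = 2 are assumptions. Run on the example tree, it reproduces the instruction sequence traced above.

```python
R = 2                                          # number of machine registers (assumption)

def label(n, leftmost=True):
    """Sethi-Ullman label for a binary tree: leaves are strings, nodes (op, left, right)."""
    if isinstance(n, str):
        return 1 if leftmost else 0
    l1, l2 = label(n[1], True), label(n[2], False)
    return max(l1, l2) if l1 != l2 else l1 + 1

def swap(stack):
    stack[-1], stack[-2] = stack[-2], stack[-1]

def gencode(n, rstack, tstack, out):
    if isinstance(n, str):                     # a leftmost leaf: load it into the top register
        out.append(f"MOV {n}, {rstack[-1]}")
        return
    op, n1, n2 = n
    l1, l2 = label(n1), label(n2, leftmost=False)
    if l2 == 0:                                # case 1: right operand is a plain name
        gencode(n1, rstack, tstack, out)
        out.append(f"{op} {n2}, {rstack[-1]}")
    elif 1 <= l1 < l2 and l1 < R:              # case 2: right subtree needs more registers
        swap(rstack)
        gencode(n2, rstack, tstack, out)
        reg = rstack.pop()                     # reg holds n2's value
        gencode(n1, rstack, tstack, out)       # n1's value in the new top register
        out.append(f"{op} {reg}, {rstack[-1]}")    # top := top op reg
        rstack.append(reg)
        swap(rstack)
    elif 1 <= l2 <= l1 and l2 < R:             # case 3: left subtree needs at least as many
        gencode(n1, rstack, tstack, out)
        reg = rstack.pop()                     # reg holds n1's value
        gencode(n2, rstack, tstack, out)       # n2's value in the new top register
        out.append(f"{op} {rstack[-1]}, {reg}")    # reg := reg op top
        rstack.append(reg)
    else:                                      # case 4: both labels >= R, spill to a temporary
        gencode(n2, rstack, tstack, out)
        temp = tstack.pop()
        out.append(f"MOV {rstack[-1]}, {temp}")
        gencode(n1, rstack, tstack, out)
        tstack.append(temp)
        out.append(f"{op} {temp}, {rstack[-1]}")

# t4 := (a + b) - (e - (c + d)), as in the trace above
t4 = ("SUB", ("ADD", "a", "b"), ("SUB", "e", ("ADD", "c", "d")))
out = []
gencode(t4, ["R1", "R0"], ["T1", "T0"], out)   # top of each stack is the last list element
print("\n".join(out))
# MOV e, R1 / MOV c, R0 / ADD d, R0 / SUB R0, R1 / MOV a, R0 / ADD b, R0 / SUB R1, R0
```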

Common Subexpressions
Nodes with more than one parent in a DAG are called shared nodes.
Optimal code generation from DAGs is NP-complete, both for a one-register machine and for a machine with an unlimited number of registers.

Partitioning a DAG into Trees
Partition a DAG into a set of trees by finding, for each root and shared node n, the maximal subtree with n as root that includes no other shared nodes except as leaves.
Determine a code generation ordering for the trees.
Generate code for each tree using the algorithms for generating code from trees.

An Example
(Figure: the DAG from the node-listing example partitioned into trees; each root and shared node becomes the root of its own tree, and shared nodes reappear as leaves of the trees that use them.)

Dynamic Programming Code Generation
The dynamic programming algorithm applies to a broad class of register machines with complex instruction sets.
The machine has r interchangeable registers.
The machine has instructions of the form Ri := E, where E is any expression containing operators, registers, and memory locations; if E involves registers, Ri must be one of them.

Dynamic Programming
The dynamic programming algorithm partitions the problem of generating optimal code for an expression into subproblems of generating optimal code for the subexpressions of the given expression (for example, code for T1 + T2 is built from optimal code for T1 and for T2).

Contiguous Evaluation
We say a program P evaluates a tree T contiguously if:
  it first evaluates those subtrees of T that need to be computed into memory,
  it then evaluates the subtrees of the root, in either order,
  it finally evaluates the root.

Optimally Contiguous Program
For the machines defined above, given any program P to evaluate an expression tree T, we can find an equivalent program P' such that:
  P' is of no higher cost than P,
  P' uses no more registers than P,
  P' evaluates the tree in a contiguous fashion.
This implies that every expression tree can be evaluated optimally by a contiguous program.

Dynamic Programming Algorithm Phase 1: compute bottom-up for each node n of the expression tree T an array C of costs, in which the ith component C[i] is the optimal cost of computing the subtree S rooted at n into a register, assuming i registers are available for the computation. C[0] is the optimal cost of computing the subtree S into memory

Dynamic Programming Algorithm To compute C[i] at node n, consider each machine instruction R := E whose expression E matches the subexpression rooted at node n Determine the costs of evaluating the operands of E by examining the cost vectors at the corresponding descendants of n

Dynamic Programming Algorithm
For those operands of E that are registers, consider all possible orders in which the corresponding subtrees of T can be evaluated into registers.
In each ordering, the first subtree corresponding to a register operand can be evaluated using i available registers, the second using i - 1 registers, and so on.

Dynamic Programming Algorithm
For node n, add in the cost of the instruction R := E that was used to match node n.
The value C[i] is then the minimum cost over all possible orders.
At each node, store the instruction used to achieve the best cost C[i], for each i.
The smallest cost in the root's vector gives the minimum cost of evaluating T.

Dynamic Programming Algorithm Phase 2: traverse T and use the cost vectors to determine which subtrees of T must be computed into memory Phase 3: traverse T and use the cost vectors and associated instructions to generate the final target code
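
A much-simplified Python sketch of phase 1, the bottom-up cost-vector computation, assuming two registers, unit instruction costs, and only the instruction forms Ri := Mj, Mi := Ri, Ri := Ri op Rj, and Ri := Ri op Mj; the tuple tree encoding is an assumption. On the example tree of the next slide it reproduces the root's cost vector (8, 8, 7).

```python
R = 2                                         # number of registers (as in the example below)

def costs(n):
    """Cost vector C[0..R] for tree n: C[0] = best cost into memory,
    C[i] = best cost into a register with i registers available."""
    if isinstance(n, str):                    # a leaf already resides in memory
        return [0] + [1] * R                  # cost 0 in memory, 1 load into any register
    _, left, right = n
    cl, cr = costs(left), costs(right)
    c = [None] * (R + 1)
    for i in range(1, R + 1):
        best = cr[0] + cl[i] + 1              # Ri := Ri op Mj: right subtree via memory
        if i >= 2:                            # Ri := Ri op Rj: both operands in registers
            best = min(best,
                       cl[i] + cr[i - 1] + 1,     # evaluate the left subtree first
                       cr[i] + cl[i - 1] + 1)     # evaluate the right subtree first
        c[i] = best
    c[0] = min(c[1:]) + 1                     # compute into a register, then store (Mi := Ri)
    return c

# (a - b) + c * (d / e), the example tree from the next slide
tree = ("+", ("-", "a", "b"), ("*", "c", ("/", "d", "e")))
print(costs(tree))                            # [8, 8, 7]
```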

An Example
Consider a machine with two registers R0 and R1 and the instructions (each of unit cost):
  Ri := Mj
  Mi := Ri
  Ri := Rj
  Ri := Ri op Rj
  Ri := Ri op Mj
(Figure: the annotated tree for (a - b) + c * (d / e); the leaves have cost vector (0, 1, 1), the - and / nodes (3, 2, 2), the * node (5, 5, 4), and the + root (8, 8, 7).)

An Example
The minimum-cost code obtained in phase 3:
  R0 := c
  R1 := d
  R1 := R1 / e
  R0 := R0 * R1
  R1 := a
  R1 := R1 - b
  R1 := R1 + R0

Code Generator Generators
A tool to automatically construct the instruction-selection phase of a code generator.
Such tools may use tree grammars or context-free grammars to describe the target machine.
Register allocation is implemented as a separate mechanism.

Tree Rewriting
(Figure: the input tree for a[i] := b + 1 that the rules below will reduce; it is built from :=, ind, +, memb, const1, consta, and regsp nodes.)

Tree Rewriting
The code is generated by reducing the input tree to a single node using a sequence of tree-rewriting rules.
Each tree-rewriting rule has the form
  replacement ← template { action }
where replacement is a single node, template is a tree, and action is a code fragment.
A set of tree-rewriting rules is called a tree-translation scheme.

An Example
  regi ← +(regi, regj)   { ADD Rj, Ri }
Each tree template represents a computation performed by the sequence of machine instructions emitted by the associated action.

Tree Rewriting Rules
(1) regi ← constc                  { MOV #c, Ri }
(2) regi ← mema                    { MOV a, Ri }
(3) mem  ← :=(mema, regi)          { MOV Ri, a }
(4) mem  ← :=(ind(regi), regj)     { MOV Rj, *Ri }
(5) regi ← ind(+(constc, regj))    { MOV c(Rj), Ri }

Tree Rewriting Rules
(6) regi ← +(regi, ind(+(constc, regj)))   { ADD c(Rj), Ri }
(7) regi ← +(regi, regj)                   { ADD Rj, Ri }
(8) regi ← +(regi, const1)                 { INC Ri }

An Example
Rule (1) matches the leaf consta: emit { MOV #a, R0 } and replace it with reg0.

An Example
Rule (7) matches +(reg0, regsp): emit { ADD SP, R0 }, leaving reg0.

An Example
Rule (6) matches +(reg0, ind(+(consti, regsp))): emit { ADD i(SP), R0 }.
Alternatively, rule (5) could first rewrite ind(+(consti, regsp)) to reg1 with { MOV i(SP), R1 }, and rule (7) would then add reg1 into reg0.

An Example
Rule (2) matches memb: emit { MOV b, R1 }, replacing it with reg1.

An Example
Rule (8) matches +(reg1, const1): emit { INC R1 }, leaving reg1.

An Example
Rule (4) matches :=(ind(reg0), reg1): emit { MOV R1, *R0 }, reducing the tree to a single mem node.

Tree Pattern Matching
The tree pattern matching algorithm can be implemented by extending the multiple-keyword pattern matching algorithm.
Each tree template is represented by a set of strings, each of which represents a path from the root to a leaf.
Each rule is associated with cost information.
The dynamic programming algorithm can be used to select an optimal sequence of matches.

Semantic Predicates
  regi ← +(regi, constc)   { if c = 1 then INC Ri else ADD #c, Ri }
The general use of semantic actions and predicates can provide greater flexibility and ease of description than a purely grammatical specification.

Graph Coloring
In the first pass, target machine instructions are selected as though there were an infinite number of symbolic registers.
In the second pass, physical registers are assigned to symbolic registers using graph coloring algorithms.
During the second pass, if a register is needed when all available registers are in use, some of the used registers must be spilled.

Interference Graph
For each procedure, a register-interference graph is constructed.
The nodes in the graph are symbolic registers.
An edge connects two nodes if one is live at a point where the other is defined.

K-Colorable Graphs
A graph is said to be k-colorable if each node can be assigned one of k colors such that no two adjacent nodes have the same color.
A color represents a register.
The problem of determining whether a graph is k-colorable is NP-complete.

A Graph Coloring Algorithm
Remove a node n and its edges if it has fewer than k neighbors.
Repeat the removing step until we end up with the empty graph or a graph in which every node has k or more adjacent nodes.
In the latter case, a node is selected and spilled by deleting that node and its edges, and the removing step continues.

A Graph Coloring Algorithm
The nodes in the graph can be colored in the reverse of the order in which they are removed.
Each node can be assigned a color not assigned to any of its neighbors.
Spilled nodes can be assigned any color.
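
A short Python sketch of the remove-then-color discipline; the spill heuristic (pick the highest-degree node) and the example graph at the bottom are assumptions, not taken from the slides' figure, and spilled nodes are simply left uncolored here.

```python
def color(graph, k):
    """Chaitin-style coloring of a register-interference graph.
    graph: node -> set of neighbours.  Returns (node -> color, spilled nodes)."""
    graph = {n: set(adj) for n, adj in graph.items()}
    stack, spilled = [], set()
    while graph:
        # remove a node with fewer than k neighbours if one exists ...
        n = next((n for n in graph if len(graph[n]) < k), None)
        if n is None:
            # ... otherwise pick a node to spill (here: the most constrained one)
            n = max(graph, key=lambda m: len(graph[m]))
            spilled.add(n)
        stack.append((n, graph.pop(n)))
        for adj in graph.values():
            adj.discard(n)
    coloring = {}
    for n, neighbours in reversed(stack):      # color in reverse removal order
        used = {coloring[m] for m in neighbours if m in coloring}
        coloring[n] = None if n in spilled else next(c for c in range(k) if c not in used)
    return coloring, spilled

# a small example interference graph (edge set assumed, not the slides' figure), k = 3
graph = {1: {2, 3, 4}, 2: {1, 4, 5}, 3: {1, 4}, 4: {1, 2, 3, 5}, 5: {2, 4}}
print(color(graph, 3))
```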

An Example
(Figure: a five-node interference graph reduced by removing nodes 1, 2, 3, 4, and 5 in turn, each having fewer than k neighbours when removed.)

An Example
(Figure: the same graph recolored in reverse removal order using the three colors R, G, and B.)

Peephole Optimization
Improve the performance of the target program by examining and transforming a short sequence of target instructions.
May need repeated passes over the code.
Can also be applied directly after intermediate code generation.

Examples
Redundant loads and stores:
  MOV R0, a
  MOV a, R0     (the second instruction can be deleted if it has no label)
Algebraic simplification: statements such as
  x := x + 0
  x := x * 1
can be eliminated.
Constant folding:
  x := 2 + 3    becomes   x := 5
  y := x + 3    becomes   y := 8
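
Two of these transformations, the redundant load after a store and constant folding of literal operands, can be sketched as a single peephole pass in Python; the textual instruction forms matched by the regular expressions are assumptions.

```python
import re

def peephole(code):
    """One pass of two simple peephole rules over MOV-style and three-address code."""
    out, i = [], 0
    while i < len(code):
        cur = code[i]
        nxt = code[i + 1] if i + 1 < len(code) else None
        # MOV R, a immediately followed by MOV a, R: the load is redundant
        # (safe only if the second instruction has no label, i.e. cannot be jumped to)
        m = re.fullmatch(r"MOV (\w+), (\w+)", cur)
        if m and nxt == f"MOV {m.group(2)}, {m.group(1)}":
            out.append(cur)
            i += 2
            continue
        # x := c1 op c2 with constant operands: fold at compile time
        m = re.fullmatch(r"(\w+) := (\d+) ([+*-]) (\d+)", cur)
        if m:
            x, c1, op, c2 = m.group(1), int(m.group(2)), m.group(3), int(m.group(4))
            val = {"+": c1 + c2, "-": c1 - c2, "*": c1 * c2}[op]
            out.append(f"{x} := {val}")
            i += 1
            continue
        out.append(cur)
        i += 1
    return out

print(peephole(["MOV R0, a", "MOV a, R0", "x := 2 + 3"]))
# ['MOV R0, a', 'x := 5']
```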

Examples
Unreachable code:
  #define debug 0
  if (debug) { print debugging information }
After substituting debug = 0, the intermediate code is
    if 0 <> 1 goto L1
    print debugging information
  L1:
The test 0 <> 1 always succeeds, so this becomes an unconditional jump
    goto L1
    print debugging information
  L1:
and the print statement is now unreachable and can be eliminated.

Examples
Flow-of-control optimization: replace a jump to a jump.
      goto L1                            goto L2
      …                  becomes         …
  L1: goto L2                        L1: goto L2

      goto L1                            if a < b goto L2
      …                                  goto L3
  L1: if a < b goto L2   becomes         …
      goto L3                        L3:
      …
  L3:

Examples
Reduction in strength: replace expensive operations by cheaper ones.
  x^2 → x * x
  fixed-point multiplication or division by a power of 2 → a shift
  floating-point division by a constant → floating-point multiplication by a constant

Examples
Use of machine idioms: some machines provide hardware instructions for certain specific operations, e.g., auto-increment and auto-decrement addressing modes (useful for pushing or popping a stack in parameter passing).