Global optimization

Data flow analysis
To generate better code, we need to examine definitions and uses of variables beyond basic blocks. With use-definition information, various optimizing transformations can be performed:
- Common subexpression elimination
- Loop-invariant code motion
- Constant folding
- Reduction in strength
- Dead code elimination
Basic tool: iterative algorithms over graphs.

The flow graph
- Nodes are basic blocks.
- Edges are transfers (conditional/unconditional jumps).
- For every node B (basic block) we define the sets pred(B) and succ(B), which describe the graph.
- Within a basic block we can easily (in a single pass) compute local information, typically a set:
  - variables that are assigned a value: def(B)
  - variables that are operands: use(B)
- Global information reaching B is computed from the information on all pred(B) (forward propagation) or all succ(B) (backward propagation).
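As a minimal sketch (the helper, block names, and def/use sets are assumed for illustration, not taken from the slides), a flow graph and its local sets can be represented with dictionaries of sets:

```python
# Hypothetical representation of a flow graph: succ(B) is given,
# pred(B) is derived by reversing the edges.
def make_pred(succ):
    """Derive the pred(B) sets from a succ(B) mapping."""
    pred = {b: set() for b in succ}
    for b, successors in succ.items():
        for s in successors:
            pred[s].add(b)
    return pred

# Example diamond-shaped graph (assumed for illustration).
succ = {"B1": {"B2", "B3"}, "B2": {"B4"}, "B3": {"B4"}, "B4": set()}
pred = make_pred(succ)

# Local, single-pass information per block (also assumed):
# def(B): variables assigned in B; use(B): variables read before assignment.
defs = {"B1": {"x"}, "B2": {"y"}, "B3": {"y"}, "B4": set()}
uses = {"B1": set(), "B2": {"x"}, "B3": {"x"}, "B4": {"y"}}
```

The global analyses below all traverse this pred/succ structure.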

Example: live variable analysis
Definition: a variable is live at a location if its current value is used subsequently in the computation.
Use: if a variable is not live on exit from a block, it does not need to be saved (stored in memory).
LiveIn(B) and LiveOut(B) are the sets of variables live on entry to and exit from block B.

LiveOut(B) = ∪ LiveIn(S) over all S ∈ succ(B)
  A variable is live on exit from B if it is live in any successor of B.
LiveIn(B) = use(B) ∪ (LiveOut(B) − def(B))
  A variable is live on entry if it is live on exit or used within B; we exclude variables that are defined before they are used.
LiveOut(B_exit) = ∅
  On exit from the program, nothing is live.
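These equations can be solved by round-robin iteration; a minimal Python sketch, with a three-block straight-line CFG assumed for illustration:

```python
# Backward liveness: iterate LiveOut/LiveIn until a fixed point is reached.
def liveness(blocks, succ, use, defs):
    live_in = {b: set() for b in blocks}
    live_out = {b: set() for b in blocks}
    changed = True
    while changed:
        changed = False
        for b in blocks:
            out = set().union(*(live_in[s] for s in succ[b]))
            inn = use[b] | (out - defs[b])
            if out != live_out[b] or inn != live_in[b]:
                live_out[b], live_in[b] = out, inn
                changed = True
    return live_in, live_out

# Example (assumed): B1 defines x, B2 uses x and defines y, B3 uses y.
blocks = ["B1", "B2", "B3"]
succ = {"B1": {"B2"}, "B2": {"B3"}, "B3": set()}
defs = {"B1": {"x"}, "B2": {"y"}, "B3": set()}
use  = {"B1": set(), "B2": {"x"}, "B3": {"y"}}
live_in, live_out = liveness(blocks, succ, use, defs)
```

Here x is live out of B1 (B2 still needs it), but nothing is live into B1.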

Liveness conditions
(Figure: a short code fragment, e.g. z := x * 3 followed by uses of y + 1 and z − 2, annotated with the live sets at each point: "y, z live" and "x, y live".)

Example: reaching definitions
Definition: the set of computations (quadruples) that may be used at a point.
Use: compute use-definition relations.

In(B) = ∪ Out(P) for all P ∈ pred(B)
  A computation reaches the entrance to a block if it reaches the exit of a predecessor.
Out(B) = gen(B) ∪ (In(B) − kill(B))
  A computation reaches the exit if it reaches the entrance and is not recomputed in the block, or if it is computed locally.
In(B_entry) = ∅
  Nothing reaches the entry to the program.
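The same fixed-point scheme runs forward for reaching definitions; a Python sketch (the block names and the definition labels d1–d3 are assumed for illustration):

```python
# Forward reaching definitions: In from the predecessors' Out, via union.
def reaching(blocks, pred, gen, kill):
    inn = {b: set() for b in blocks}
    out = {b: set(gen[b]) for b in blocks}   # In is empty, so Out starts at gen
    changed = True
    while changed:
        changed = False
        for b in blocks:
            i = set().union(*(out[p] for p in pred[b]))
            o = gen[b] | (i - kill[b])
            if i != inn[b] or o != out[b]:
                inn[b], out[b] = i, o
                changed = True
    return inn, out

# Example (assumed): d1: x := ... and d2: y := ... in B1; d3: x := ... in B2,
# which kills d1; B3 is reached from both B1 and B2.
blocks = ["B1", "B2", "B3"]
pred = {"B1": set(), "B2": {"B1"}, "B3": {"B1", "B2"}}
gen  = {"B1": {"d1", "d2"}, "B2": {"d3"}, "B3": set()}
kill = {"B1": set(), "B2": {"d1"}, "B3": set()}
inn, out = reaching(blocks, pred, gen, kill)
```

Both definitions of x (d1 and d3) reach B3, because it can be entered along either path.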

Iterative solution
Note that the equations are monotonic: if Out(B) increases, In(B′) can only increase for the successors B′ of B.
General approach: start from a lower bound and iterate until nothing changes.
Initially In(b) = ∅ for all b, so Out(b) = gen(b).

change := true;
while change loop
   change := false;
   for all b ∈ blocks loop
      in (b) := ∪ out (p), for all p ∈ pred (b);
      oldout := out (b);
      out (b) := gen (b) ∪ (in (b) − kill (b));
      if oldout /= out (b) then change := true; end if;
   end loop;
end loop;

Workpile algorithm
Instead of recomputing all blocks, keep a queue of nodes that may have changed, and iterate until the queue is empty:

while not empty (queue) loop
   dequeue (b);
   recompute (b);
   if b has changed, enqueue all its successors;
end loop;

Better algorithms use node orderings.
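A worklist version of the reaching-definitions iteration might look like this in Python (example graph and labels are assumed); only blocks whose input may have changed are reprocessed:

```python
from collections import deque

# Worklist solver for reaching definitions: when Out(b) changes,
# only the successors of b need to be revisited.
def worklist_reaching(blocks, pred, succ, gen, kill):
    out = {b: set(gen[b]) for b in blocks}
    queue = deque(blocks)
    queued = set(blocks)
    while queue:
        b = queue.popleft()
        queued.discard(b)
        inn = set().union(*(out[p] for p in pred[b]))
        new_out = gen[b] | (inn - kill[b])
        if new_out != out[b]:
            out[b] = new_out
            for s in succ[b]:            # enqueue affected successors only
                if s not in queued:
                    queue.append(s)
                    queued.add(s)
    return out

# Same example as for reaching definitions (assumed data).
blocks = ["B1", "B2", "B3"]
pred = {"B1": set(), "B2": {"B1"}, "B3": {"B1", "B2"}}
succ = {"B1": {"B2", "B3"}, "B2": {"B3"}, "B3": set()}
gen  = {"B1": {"d1", "d2"}, "B2": {"d3"}, "B3": set()}
kill = {"B1": set(), "B2": {"d1"}, "B3": set()}
out = worklist_reaching(blocks, pred, succ, gen, kill)
```

The `queued` set keeps each block in the queue at most once, matching the slide's note that a node is re-enqueued only when a predecessor changes.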

Example: available expressions
Definition: a computation (triple, e.g. x + y) is available at a point if it has already been computed on every path reaching that point.
Use: common subexpression elimination.
Local information:
- exp_gen(b) is the set of expressions computed in b
- exp_kill(b) is the set of expressions whose operands are assigned in b

In(b) = ∩ Out(p) for all p ∈ pred(b)
  A computation is available on entry if it is available on exit from all predecessors.
Out(b) = exp_gen(b) ∪ (In(b) − exp_kill(b))

Iterative solution
Note that the equations are monotonic: if Out(B) increases, In(B′) does not decrease for any successor B′.
Approach as above: start from a lower bound and iterate until nothing changes.
Initially In(b) = ∅ for all b, so Out(b) = exp_gen(b).

change := true;
while change loop
   change := false;
   for all b ∈ blocks loop
      in (b) := ∩ out (p), for all p ∈ pred (b);
      oldout := out (b);
      out (b) := exp_gen (b) ∪ (in (b) − exp_kill (b));
      if oldout /= out (b) then change := true; end if;
   end loop;
end loop;
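Relative to reaching definitions, the only changes are intersection instead of union and the exp_gen/exp_kill sets; a Python sketch, with an assumed diamond CFG in which one branch kills x+y:

```python
# Available expressions: In(b) is the intersection of the predecessors' Out.
def available(blocks, pred, exp_gen, exp_kill):
    inn = {b: set() for b in blocks}
    out = {b: set(exp_gen[b]) for b in blocks}   # In is empty initially
    changed = True
    while changed:
        changed = False
        for b in blocks:
            preds = [out[p] for p in pred[b]]
            i = set.intersection(*preds) if preds else set()
            o = exp_gen[b] | (i - exp_kill[b])
            if i != inn[b] or o != out[b]:
                inn[b], out[b] = i, o
                changed = True
    return inn, out

# Example (assumed): B1 computes x+y; B3 assigns to x, killing x+y;
# x+y therefore survives only along the B2 path and is not available at B4.
blocks = ["B1", "B2", "B3", "B4"]
pred = {"B1": set(), "B2": {"B1"}, "B3": {"B1"}, "B4": {"B2", "B3"}}
exp_gen  = {"B1": {"x+y"}, "B2": set(), "B3": set(), "B4": set()}
exp_kill = {"B1": set(), "B2": set(), "B3": {"x+y"}, "B4": set()}
inn, out = available(blocks, pred, exp_gen, exp_kill)
```

Because B4 has a predecessor where x+y was killed, the intersection makes it unavailable there, so CSE cannot reuse it in B4.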

Use-definition chaining
The closure of reaching definitions: map each occurrence (operand in a quadruple) to the quadruples that may have generated its value.
- ud(o): the set of quadruples that may have computed the value of occurrence o
- Inverse map, du(q): the set of occurrences that may use the value computed at quadruple q

Finding loops in the flow graph
- A node n1 dominates n2 if all execution paths that reach n2 go through n1 first.
- The entry point of the program dominates all nodes in the program.
- The entry to a loop dominates all nodes in the loop.
- A loop is identified by the presence of a (back) edge from a node n to a dominator of n.
Data-flow equation: dom(b) = {b} ∪ ∩ dom(p) for all p ∈ pred(b)
  A node dominates itself, and any other dominator of b must dominate all of b's predecessors.
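The dominator equation can also be solved iteratively, but note that, unlike the analyses above, it starts from an upper bound: every non-entry node is initially assumed to be dominated by all nodes. A Python sketch (the graph, with back edge B3 → B2, is assumed for illustration):

```python
# Iterative dominator computation: dom(b) = {b} union the intersection
# of dom(p) over all predecessors p of b.
def dominators(blocks, pred, entry):
    dom = {b: set(blocks) for b in blocks}   # upper bound for non-entry nodes
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for b in blocks:
            if b == entry:
                continue
            # Assumes every non-entry block is reachable (pred(b) non-empty).
            new = {b} | set.intersection(*(dom[p] for p in pred[b]))
            if new != dom[b]:
                dom[b] = new
                changed = True
    return dom

# Example (assumed): B2 is a loop header; the edge B3 -> B2 is a back edge
# because its target B2 dominates its source B3.
blocks = ["B1", "B2", "B3", "B4"]
pred = {"B1": set(), "B2": {"B1", "B3"}, "B3": {"B2"}, "B4": {"B2"}}
dom = dominators(blocks, pred, "B1")
```

Checking `target in dom[source]` for each edge then identifies the back edges, and hence the loops.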

Loop optimization
A computation (x op y) is invariant within a loop if:
- x and y are constant, or
- the definitions in ud(x) and ud(y) are all outside the loop, or
- there is a single definition of x and of y within the loop, and that definition is itself invariant.
A quadruple Q that is loop-invariant can be moved to the pre-header of the loop iff:
- Q dominates all exits from the loop
- Q is the only assignment to the target variable in the loop
- there is no use of the target variable in the loop that is reached by another definition.
Caveat: an exception in Q may now be raised before the loop.

Strength reduction
A specialized loop optimization: formal differentiation.
An induction variable in a loop takes values that form an arithmetic series:
  k = j * c0 + c1
where j is the loop variable (j = 0, 1, …) and c0, c1 are constants; j is a basic induction variable.
We can compute k := k + c0 instead, replacing multiplication with addition. If j increments by d, k increments by d * c0.
Generalization to polynomials in j: all multiplications can be removed. Important for loops over multidimensional arrays.

Induction variables
For every induction variable, establish a triple (var, incr, init).
- The loop variable v is (v, 1, v0).
- Any variable that has a single assignment of the form k := j * c0 + c1 is an induction variable with triple (k, c0 * incr_j, c1 + c0 * j0).
Note that c0 * incr_j is a static constant.
The transformation:
- Insert in the loop pre-header: k := c0 * j0 + c1
- Insert after each increment of j: k := k + c0 * incr_j
- Remove the original assignment to k
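The effect of the transformation can be demonstrated directly; the constants c0 = 4, c1 = 8 and the loop bound are assumed for illustration:

```python
# Strength reduction demo: k = j*c0 + c1 computed two ways.
c0, c1 = 4, 8                  # e.g. k is a byte offset: 4*j + 8

# Original loop: one multiplication per iteration.
original = [j * c0 + c1 for j in range(10)]

# After strength reduction: multiplication only in the pre-header.
reduced = []
k = c0 * 0 + c1                # pre-header: k := c0 * j0 + c1   (j0 = 0)
for j in range(10):
    reduced.append(k)
    k += c0                    # after j increments: k := k + c0 * incr_j
```

Both loops produce the same sequence of values for k, but the second performs no multiplication inside the loop body.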

Global constant propagation
The domain is a set of values, not a bit-vector. The status of a variable is one of (constant c, non-const, unknown).
Like common subexpression elimination, but instead of intersection, define a merge operation:
- Merge (c, unknown) = c
- Merge (non-const, anything) = non-const
- Merge (c1, c2) = if c1 = c2 then c1 else non-const
In(b) = Merge { Out(p) } for all p ∈ pred(b)
Initially all variables are unknown, except for explicit constant assignments.
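The merge rules define a small lattice; a Python sketch, with hypothetical sentinel values standing in for unknown and non-const:

```python
from functools import reduce

# Sentinels (hypothetical names) for the two non-constant lattice values.
UNKNOWN, NONCONST = "unknown", "non-const"

def merge(a, b):
    """Merge two variable statuses at a join point."""
    if a == UNKNOWN:
        return b
    if b == UNKNOWN:
        return a
    if a == NONCONST or b == NONCONST:
        return NONCONST
    return a if a == b else NONCONST   # equal constants stay constant

def join(statuses):
    """In(b): merge the statuses flowing in from all predecessors."""
    return reduce(merge, statuses, UNKNOWN)
```

For example, merging the constant 5 from two paths yields 5, while merging 5 and 7 yields non-const, so the use site cannot be folded.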