Compiler Principles Fall 2015-2016 Compiler Principles Lecture 8: Dataflow & Optimizations 1 Roman Manevich Ben-Gurion University of the Negev.

Tentative syllabus

Front End: Scanning; Top-down Parsing (LL); Bottom-up Parsing (LR)
Intermediate Representation: Lowering; Operational Semantics; Lowering Correctness
Optimizations: Dataflow Analysis; Loop Optimizations
Code Generation: Register Allocation

Previously The need for Intermediate Representations – Three-Address Code Lowering abstract syntax trees (AST) in While to IL Operational semantics – While – natural semantics (big step) – IL – small step semantics Correctness of lowering – Equivalence of expressions and IL code – Equivalence of statements and IL code – Proving correctness by structural induction on ASTs 3

While syntax

A → n | x | A ArithOp A | ( A )
ArithOp → - | + | * | /
B → true | false | A = A | A ≤ A | ¬ B | B ∧ B | ( B )
S → x := A | skip | S ; S | { S } | if B then S else S | while B S

n ∈ Num – numerals
x ∈ Var – program variables

IL syntax

V → n | x
R → V Op V | V
Op → - | + | * | / | = | ≤ | > | …
C → l : skip | l : x := R | l : Goto l' | l : IfZ x Goto l' | l : IfNZ x Goto l'
IR → C+

n ∈ Num – numerals
l ∈ Num – labels
x ∈ Temp ∪ Var – temporaries and variables

Agenda

Extending While/IL syntax with procedures
Introduction to optimizations
Formalisms for program analysis
– Basic blocks
– Control flow graphs
Dataflow analyses and optimizations
– Live variables → dead code elimination
– In recitation: Available expressions → common sub-expression elimination + copy propagation

7 Adding procedures

While+procedures syntax

A → n | x | A ArithOp A | ( A ) | f ( A* )
ArithOp → - | + | * | /
B → true | false | A = A | A ≤ A | ¬ B | B ∧ B | ( B )
S → x := A | skip | S ; S | { S } | if B then S else S | while B S | A
F → f ( x* ) { S }
P ::= F+

n ∈ Num – numerals
x ∈ Var – program variables
f – function identifiers, including main
The statement form A can be used to call side-effecting functions; the expression form f ( A* ) returns values from functions.

IL+procedures syntax

V → n | x
R → V Op V | V | Call f( x* ) | param n
Op → - | + | * | / | = | ≤ | > | …
C → l : skip | l : x := R | l : Goto l' | l : IfZ x Goto l' | l : IfNZ x Goto l' | l : Call f( x* ) | l : Ret x
IR → C+

n ∈ Num – numerals
l ∈ Num – labels and function identifiers
x ∈ Var ∪ Temp ∪ Params – variables, temporaries, and parameters
f – function identifiers
param n – function parameter n

IL notation convenience We will not explicitly label each command We will use textual labels (e.g., L0, Lentry ) We may write L0: instead of L0: skip 10

11 Introduction to optimizations

Optimization points

(Pipeline figure: source code → Front end → IR → Code generator → target code. The user may profile the program and change the algorithm; the compiler applies IR optimizations (the topic of today and next week) and, in the code generator, register allocation, instruction selection, and peephole transformations.)

Overview of IR optimization Formalisms and Terminology – Control-flow graphs – Basic blocks Local optimizations (won’t cover this year) – Optimizing small pieces of a function Global optimizations – Optimizing functions as a whole The dataflow framework – Defining and implementing a wide class of optimizations 13

Semantics-preserving optimizations An optimization is semantics-preserving if it does not alter the semantics of the original program Examples: – Eliminating unnecessary statements – Computing values that are known statically at compile-time instead of runtime – Evaluating constant expressions outside of a loop instead of inside Non-examples: – Reordering side-effecting computations – Replacing bubble sort with quicksort (why?) The optimizations we will consider in this class are all semantics-preserving How can we find opportunities for optimizations? 14

Program analysis In order to optimize a program, the compiler has to be able to reason about the properties of that program An analysis is called sound if it never asserts an incorrect fact about a program All the analyses we will discuss in this class are sound – (Why?) 15

Soundness 16 if (y < 5) x := 137; else x := 42; Print(x); “At this point in the program, x holds some integer value”

Soundness 17 if (y < 5) x := 137; else x := 42; Print(x); “At this point in the program, x is either 137 or 42”

Soundness 18 if (y < 5) x := 137; else x := 42; Print(x); “At this point in the program, x is 137”

Soundness 19 if (y < 5) x := 137; else x := 42; Print(x); “At this point in the program, x is either 137, 42, or 271”

20 Control flow graphs

Visualizing IR

main:  t0 := Call ReadInteger()
       a := t0
       t1 := Call ReadInteger()
       b := t1
L0:    t2 := 0
       t3 := b == t2
       t4 := 0
       t5 := t3 = t4
       IfZ t5 Goto L1
       c := a
       a := b
       t6 := c % a
       b := t6
       Goto L0
L1:    Call PrintInt()


Visualizing IR

The same listing partitioned into a control-flow graph: a start node; an entry block t0 := Call ReadInteger(); a := t0; t1 := Call ReadInteger(); b := t1; a block for the loop test L0: t2 := 0; t3 := b == t2; t4 := 0; t5 := t3 = t4; IfZ t5 Goto L1; a block for the loop body c := a; a := b; t6 := c % a; b := t6; Goto L0; a block Call PrintInt; and an end node.

Basic blocks A basic block is a sequence of IR instructions where – There is exactly one label where control enters the sequence, which must be at the start of the sequence – There is exactly one label where control leaves the sequence, which must be at the end of the sequence Informally: a sequence of instructions that always execute as a group 24
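The partition into basic blocks can be computed with the classic "leader" rule: the first instruction is a leader, every branch target is a leader, and every instruction after a branch is a leader; each block runs from a leader to just before the next one. The following is a minimal sketch, not the course's implementation: the tuple encoding (label, op, target) and the op names 'goto' and 'ifz' are invented for illustration.

```python
def basic_blocks(instrs):
    # instrs: list of (label, op, target); target is a label for
    # 'goto'/'ifz' instructions and None otherwise.
    leaders = {0}                                   # first instruction
    label_index = {lab: i for i, (lab, _, _) in enumerate(instrs)}
    for i, (_, op, target) in enumerate(instrs):
        if op in ('goto', 'ifz'):
            leaders.add(label_index[target])        # branch target
            if i + 1 < len(instrs):
                leaders.add(i + 1)                  # instruction after a branch
    cuts = sorted(leaders)
    ends = cuts[1:] + [len(instrs)]
    return [instrs[a:b] for a, b in zip(cuts, ends)]
```

For example, a straight-line prefix followed by a conditional back edge yields two blocks: the three instructions up to the branch, and the instruction after it.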

Control-flow graphs A control-flow graph (CFG) is a graph of the basic blocks in a function – From here on CFG stands for “control-flow graph” and not “context free grammar” Each edge from one basic block to another indicates that control can flow from the end of the first block to the start of the second block Dedicated nodes for the start and end of a function Program executions correspond to paths in the CFG

Scope of optimizations An optimization is local if it works on just a single basic block An optimization is global if it works on an entire control-flow graph An optimization is interprocedural if it works across the control-flow graphs of multiple functions – We won't talk about this in this course 26

Optimizations and analyses Most optimizations are only possible given some analysis of the program's behavior In order to implement an optimization, we will talk about the corresponding program analyses Program analysis = algorithm that processes program and infers facts – Sound facts = facts that hold for all program executions – Sound analysis = program analysis that infers only sound facts 27

28 dead code elimination

Definition An assignment to a variable v is called dead if the value of that assignment is never read anywhere Dead code elimination removes dead assignments from the IR Whether an assignment is dead depends on the statements that follow it, and eliminating one dead assignment can make earlier assignments dead as well

Dead code elimination example

a := b
c := a
d := a + b
e := d
d := a
f := e
Print(d)

Which of these statements can we remove?

Live variables The analysis corresponding to dead code elimination is called liveness analysis A variable is live at a point in a program if later in the program its value will be read before it is written to again 31

Optimizing via liveness analysis Dead code elimination works by computing liveness for each variable, then eliminating assignments to dead variables Dead code elimination – If x := R is followed by the live-variable set { v 1,…,v k } – And x ∉ { v 1,…,v k } and R has no side-effects (not a function call) – We can eliminate x := R (if R has side-effects, we can transform the assignment to just R )

33 Live variable analysis

Plan Define liveness analysis on single statements Define liveness analysis on basic blocks Define liveness analysis on acyclic CFGs Define liveness on arbitrary CFGs 34

The goal Given an IL command C and the set of live variables LV out after C executes, compute the set of live variables LV in before C executes We will call this function the transformer of C and write it as LV in = F[C]( LV out )

(Figure: a command lab: C with LV in = { ... } before it and LV out = { ... } after it.)

DEF/USE For an IL command C, we define – DEF – the variables possibly modified by C, – USE – the variables read by C

Command type          DEF    USE
skip                  {}     {}
x := R                {x}    Vars( R )
Goto l'               {}     {}
IfZ x Goto l'         {}     {x}
IfNZ x Goto l'        {}     {x}
Call f( x 1,…, x n )  {}     { x 1,…, x n }
Ret x                 {}     {x}

Liveness transformer Given an IL command C and the set of live variables LV out after C executes, the set of live variables LV in before C executes is given by the transformer of C:

LV in = ( LV out \ DEF(C)) ∪ USE(C)

For example, for the command x := x+1, LV in = ( LV out \ {x}) ∪ {x}, so x is live before the command regardless of whether it is live after it.
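The transformer is a one-liner once statements carry their DEF and USE sets. A minimal sketch, assuming a hypothetical representation of a statement as a (DEF, USE) pair of sets:

```python
def transfer(stmt, lv_out):
    # LV_in = (LV_out \ DEF(C)) | USE(C)
    defs, uses = stmt
    return (lv_out - defs) | uses
```

For x := x+1, encoded as ({'x'}, {'x'}), the result equals the input set whenever x is already live, and adds x otherwise.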

38 Analyzing and optimizing basic blocks

Analyzing basic blocks Let P = C 1,…, C n be a basic block and LV n be the initial set of live variables after C n. To compute LV 1,…, LV n-1, simply apply F[C i ] for i = n down to 1.
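The backward pass over a block is then a simple loop. A sketch under the same assumed (DEF, USE) statement encoding as above; it returns the live set before each statement:

```python
def block_liveness(block, lv_exit):
    # block: list of (defs, uses) pairs; lv_exit: live variables after
    # the last statement. Applies the transformer from C_n down to C_1.
    lv = set(lv_exit)
    before = [None] * len(block)
    for i in range(len(block) - 1, -1, -1):
        defs, uses = block[i]
        lv = (lv - defs) | uses
        before[i] = lv
    return before
```

Running it on the slide's example block with exit set { b, d } reproduces the annotations shown on the next slide.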

Liveness example

{ b }
a := b;
{ a, b }
c := a;
{ a, b }
d := a + b;
{ a, b, d }
e := d;
{ a, b, e }
d := a;
{ b, d, e }
f := e;
{ b, d }
Foo(b, d)
{ }

Which statements are dead?

Dead code elimination

{ b }
a := b;
{ a, b }
c := a;        ← dead: c is not live afterwards
{ a, b }
d := a + b;
{ a, b, d }
e := d;
{ a, b, e }
d := a;
{ b, d, e }
f := e;        ← dead: f is not live afterwards
{ b, d }
Foo(b, d)
{ }

Can we find more dead assignments now?

Second liveness analysis

{ b }
a := b;
{ a, b }
d := a + b;
{ a, b, d }
e := d;
{ a, b }
d := a;
{ b, d }
Foo(b, d)
{ }

Which statements are dead?

Second dead code elimination

{ b }
a := b;
{ a, b }
d := a + b;
{ a, b, d }
e := d;        ← dead: e is not live afterwards
{ a, b }
d := a;
{ b, d }
Foo(b, d)
{ }

Can we find more dead assignments now?

Third liveness analysis

{ b }
a := b;
{ a, b }
d := a + b;
{ a, b }
d := a;
{ b, d }
Foo(b, d)
{ }

Which statements are dead?

Third dead code elimination

{ b }
a := b;
{ a, b }
d := a + b;    ← dead: d is overwritten before being read
{ a, b }
d := a;
{ b, d }
Foo(b, d)
{ }

Can we find more dead assignments now?

Fourth liveness analysis

{ b }
a := b;
{ a, b }
d := a;
{ b, d }
Foo(b, d)
{ }

No more dead statements. We can eliminate the two remaining assignments via copy propagation.

47 Combined algorithm for optimizing basic blocks

A combined algorithm Start with the initial live variables at the end of the block Traverse statements from end to beginning For each statement – If it assigns to a dead variable – eliminate it – Otherwise, compute the live variables before the statement and continue in reverse
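The combined pass can be sketched directly. Statements are assumed to be hypothetical (dst, uses) pairs with no side effects (so a dead assignment may simply be dropped); the function returns the surviving statements and the live set at block entry:

```python
def eliminate_dead(block, lv_exit):
    # Single backward pass: drop an assignment whose target is dead,
    # otherwise apply the liveness transformer and keep it.
    lv = set(lv_exit)
    kept = []
    for dst, uses in reversed(block):
        if dst is not None and dst not in lv:
            continue                      # dead assignment: eliminate
        lv = (lv - {dst}) | uses          # transformer for a live statement
        kept.append((dst, uses))
    kept.reverse()
    return kept, lv
```

On the running example with exit set { b, d }, a single pass leaves exactly a := b; d := a;, matching the result that iterated analyze-then-eliminate reached in four rounds.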

A combined algorithm

Starting from { b, d } at the end of the block and scanning backwards over
a := b; c := a; d := a + b; e := d; d := a; f := e;

f := e – f ∉ { b, d }: dead, eliminate
d := a – d is live: keep; the live set becomes { a, b }
e := d – e ∉ { a, b }: dead, eliminate
d := a + b – d ∉ { a, b }: dead, eliminate
c := a – c ∉ { a, b }: dead, eliminate
a := b – a is live: keep; the live set becomes { b }

The block that remains: a := b; d := a;

Alternative approach Define strong liveness: a variable v is live at label lab if there exists a path from that label where v is used in a live command before being overwritten Adjust transformer – Assume all assignments are dead (until proven otherwise) – If assignment may be live run usual liveness transformer, otherwise run F[ skip ] Run analysis with strong liveness transformer Eliminate dead code (no need to repeat) 61
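The adjusted transformer can be sketched as below, again over hypothetical (dst, uses) statements: an assignment whose target is not (strongly) live behaves like skip, so its uses are not propagated. Note how this differs from the plain transformer, which would add the uses regardless.

```python
def strong_transfer(stmt, lv_out):
    # Strong-liveness transformer: a dead assignment is treated as skip,
    # so it contributes nothing to the live set.
    dst, uses = stmt
    if dst is not None and dst not in lv_out:
        return set(lv_out)            # dead assignment: behaves like skip
    kill = {dst} if dst is not None else set()
    return (lv_out - kill) | uses
```

For e := d with { a, b } live afterwards, the plain transformer would report d live; the strong transformer does not, so the chain e := d; d := ... dies in one analysis run.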

62 Global liveness analysis

Global analysis technical challenges Need to be able to handle multiple predecessors/successors for a basic block – Join operator Need to be able to handle multiple paths through the control-flow graph – may need to iterate multiple times to compute final value – but the analysis still needs to terminate! – Chaotic iteration (fixed point iteration) Need to be able to assign each basic block a reasonable default value for before we've analyzed it – Bottom value 63


Global dead code elimination Local dead code elimination needed to know what variables were live on exit from a basic block This information can only be computed as part of a global analysis How do we extend our liveness analysis to handle a CFG? 65

66 Global liveness analysis: CFGs without loops

CFGs without loops

(CFG: Start → entry block a := b + c; b := c + d; e := c + d; → two branch blocks, x := a + b; y := c + d; and y := a + b; x := c + d; → both reach Foo(x,y) → End)

CFGs without loops

Which variables may be live on some execution path? Annotating the CFG with live-in sets: {x, y} before Foo(x,y); {a, b, c, d} before each branch block; {b, c, d} before the entry block, with {a, c, d} inside it after a := b + c.

Which assignments are redundant? Given the live-in sets {x, y} before Foo(x,y), {a, b, c, d} before each branch block, and {b, c, d} before the entry block ({a, c, d} after a := b + c), the assignment e := c + d is redundant: e is not live after it.


CFGs without loops

After eliminating the dead assignment e := c + d, the entry block becomes a := b + c; b := c + d; and the rest of the CFG (Start, the branch blocks, Foo(x,y) with live-in {x, y}, End) is unchanged.


Major changes – part 1 In a local analysis, each statement has exactly one predecessor In a global analysis, each statement may have multiple predecessors A global analysis must have some means of combining information from all predecessors of a basic block 73

Combining values

The entry block has two successors, so the currently-computed value must be combined with the new value: the live set after the entry block is the union (join) of its successors' live-in sets, {a, b, c, d} ∪ {a, b, c, d} = {a, b, c, d}. Re-applying the transformer then gives {b, c, d} before the entry block and {a, c, d} after its first statement.

77 Global liveness analysis: handling loops

Major changes – part 2 In a local analysis, there is only one possible path through a basic block In a global analysis, there may be many paths through a CFG May need to recompute values multiple times as more information becomes available Need to be careful when doing this not to loop infinitely! – (More on that later) 78

CFGs with loops

(Figure: a CFG containing a loop over blocks with the statements a := a + b; d := b + c; | c := a + b; a := b + c; | d := a + c; b := c + d; | c := c + d; IfZ …, plus Start, Ret a, and End nodes.)

How do we use join here? Inside the loop, the live set after a block depends on a successor along the back edge whose live-in set has not been computed yet: we know Ret a has live-in {a}, but the loop block's contribution to the join is still unknown.

Major changes – part 3 In a local analysis, there is always a well defined “first” statement to begin processing In a global analysis with loops, every basic block might depend on every other basic block To fix this, we need to assign initial values to all of the blocks in the CFG 81

CFGs with loops – initialization

Before iterating, assign every block the initial value {}; the live-in set of Ret a is {a}.

CFGs with loops – iteration

Repeatedly apply the transformers and join at merge points; live sets only ever grow. Successive passes discover {a} before Ret a, then {a, b, c}, {b, c}, {c, d}, {a, b}, {a, c, d}, … for the loop blocks, until a pass changes nothing.

CFGs with loops – fixed point

At the fixed point the live-in sets of the blocks are {a, b}, {b, c}, {a, c, d}, {a, b, c}, {a, c, d}, {a, b, c}, with {a} before Ret a.

98 formalizing Global liveness analysis

Global liveness analysis Initially, set IN[C] = { } for each command C Set IN[end] to the set of variables known to be live on exit ({ } unless special assumptions apply) – Language-specific knowledge Repeat until no changes occur: – For each command C, in any order you'd like: Set OUT[C] to be the union of IN[C'] over each successor C' of C Set IN[C] to (OUT[C] \ DEF(C)) ∪ USE(C) Yet another fixed-point iteration!
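The algorithm above can be sketched directly as chaotic iteration. The CFG encoding is invented for illustration: a dict from node name to successor list, with per-node DEF and USE sets (for a multi-statement block these would be the block's composed DEF/USE, i.e. all variables defined, and those used before being defined) and a distinguished 'end' node.

```python
def global_liveness(cfg, use, defs, exit_live):
    # cfg: node -> list of successors; 'end' is the exit node.
    IN = {n: set() for n in cfg}          # bottom value for every node
    IN['end'] = set(exit_live)            # variables live on exit
    changed = True
    while changed:                        # iterate to a fixed point
        changed = False
        for n in cfg:                     # any order works
            if n == 'end':
                continue
            out = set()
            for s in cfg[n]:
                out |= IN[s]              # join: union over successors
            new_in = (out - defs[n]) | use[n]
            if new_in != IN[n]:
                IN[n] = new_in
                changed = True
    return IN
```

On a tiny loop (start → b1 → b2, b2 → b1 and b2 → end, where b1 is x := x + 1 and b2 tests x), iteration stabilizes with x live everywhere before end.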

Global liveness analysis

For a command s: a := b + c with successors s2 and s3:
OUT[s] = IN[s2] ∪ IN[s3]
IN[s] = (OUT[s] \ {a}) ∪ {b, c}

Why does this work? To show correctness, we need to show that – The algorithm eventually terminates, and – When it terminates, it has a sound answer 101

Termination argument …? 102

Termination argument Once a variable is discovered to be live during some point of the analysis, it always stays live Only finitely many variables and finitely many places where a variable can become live 103

Soundness argument (sketch) Each individual rule, applied to some set, correctly updates liveness in that set When computing the union of the set of live variables, a variable is only live if it was live on some path leaving the statement So, locally, every step in the algorithm is correct Does local correctness imply global correctness? – Yes under some conditions – Monotone dataflow 104

105 Semantic equivalence

Let ElimDeadCode(P) = P’ be a dead code elimination optimization applied to program P We define semantic equivalence between P and P’ as follows Let V = ImportantVars(P) be the variables that appear in Ret and Call commands We define equivalence between the states of P and P’ by equality on V Lemma: ImportantVars(P) = ImportantVars(P’) Definition: P and P’ are equivalent if, for each input state m, the states generated at corresponding Ret commands and Call commands, and the output states, agree on V

Example

P:  a := b; c := a; d := a + b; e := d; d := a; f := e; Print(d)
P’: a := b; d := a; Print(d)
ImportantVars(P) = {d}

For m = [pc ↦ 1, a ↦ 0, b ↦ 1, c ↦ 2, d ↦ 3, e ↦ 4, f ↦ 5]:
At the function call, for P we have [pc ↦ 6, a ↦ 1, b ↦ 1, c ↦ 1, d ↦ 1, e ↦ 2, f ↦ 2]| {d} = [d ↦ 1], and for P’ we have [pc ↦ 3, a ↦ 1, b ↦ 1, c ↦ 2, d ↦ 1, e ↦ 4, f ↦ 5]| {d} = [d ↦ 1].
At the output: ⟦P⟧m = [pc ↦ 7, a ↦ 1, b ↦ 1, c ↦ 1, d ↦ 1, e ↦ 2, f ↦ 2] and ⟦P’⟧m = [pc ↦ 4, a ↦ 1, b ↦ 1, c ↦ 2, d ↦ 1, e ↦ 4, f ↦ 5], so (⟦P⟧m)| {d} = (⟦P’⟧m)| {d} = [d ↦ 1].

Next lecture: Dataflow Analysis Framework