Cse322, Programming Languages and Compilers
Lecture #13, May 15, 2007
Control flow graphs, liveness using data flow, dataflow equations, using fixed points, dynamic liveness, the halting problem, register interference graphs, graph coloring, flow-graph relations, dominators.

Assignments
Project #2 is due on Monday, May 22, 2006.
  – The project template is ready.
  – Please notify me, and I will send you the template.
Reading – same as on Monday:
  – Chapter 9, Sections 9.1 and 9.2, liveness analysis, pp.
Possible quiz next Monday.

Control Flow Graphs
To assign registers on a per-procedure basis, we need to perform liveness analysis on the entire procedure, not just on basic blocks.
To analyze the properties of entire procedures with multiple basic blocks, we use a control-flow graph.
In its simplest form, a control-flow graph has one node per statement, and an edge from n1 to n2 if control can ever flow directly from statement 1 to statement 2.

We write pred[n] for the set of predecessors of node n, and succ[n] for the set of successors. (In practice, one usually builds control-flow graphs where each node is a basic block rather than a single statement.)
Example routine:
      a = 0
  L:  b = a + 1
      c = c + b
      a = b * 2
      if a < N goto L
      return c

Example
One node per statement of the routine:
  1: a = 0
  2: b = a + 1
  3: c = c + b
  4: a = b * 2
  5: a < N        (true branch back to node 2, false branch to node 6)
  6: return c

  pred[1] = ?       succ[1] = {2}
  pred[2] = {1,5}   succ[2] = {3}
  pred[3] = {2}     succ[3] = {4}
  pred[4] = {3}     succ[4] = {5}
  pred[5] = {4}     succ[5] = {6,2}
  pred[6] = {5}     succ[6] = {}
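As a sketch (not from the original slides), the same graph can be written down in ML in the array style used later in this lecture, leaving index 0 unused so that node numbers and array indices coincide; pred is then computable from succ:

val succ = Array.fromList [[], [2], [3], [4], [5], [6,2], []]

(* pred[j] = all nodes i such that j appears in succ[i] *)
fun predsOf succ j =
  List.filter
    (fn i => List.exists (fn k => k = j) (Array.sub (succ, i)))
    (List.tabulate (Array.length succ, fn i => i))

val pred = Array.tabulate (Array.length succ, predsOf succ)
(* Array.sub (pred, 2) = [1,5], matching the table above. *)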

Liveness Analysis using Dataflow
Working from the future to the past, we can determine the edges over which each variable is live. In the example:
  b is live on 2 → 3 and on 3 → 4.
  a is live on 1 → 2, on 4 → 5, and on 5 → 2 (but not on 2 → 3 → 4).
  c is live throughout (including on entry → 1).
We can see that two registers suffice to hold a, b, and c.

Dataflow equations
We can do liveness analysis (and many other analyses) via dataflow analysis.
A node defines a variable if its corresponding statement assigns to it.
A node uses a variable if its corresponding statement mentions that variable in an expression (e.g., on the right-hand side of an assignment).
  – Recall our ML function varsOf.

Definitions
For any variable v, define:
  defV[v] = set of graph nodes that define v
  useV[v] = set of graph nodes that use v
Similarly, for any node n, define:
  defN[n] = set of variables defined by node n
  useN[n] = set of variables used by node n

Example
For the same flow graph:
  defV[a] = {1,4}   useV[a] = {2,5}
  defV[b] = {2}     useV[b] = {3,4}
  defV[c] = {?,3}   useV[c] = {3,6}

  defN[1] = {a}   useN[1] = {}
  defN[2] = {b}   useN[2] = {a}
  defN[3] = {c}   useN[3] = {c,b}
  defN[4] = {a}   useN[4] = {b}
  defN[5] = {}    useN[5] = {a}
  defN[6] = {}    useN[6] = {c}
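The def/use tables can likewise be written as ML arrays (again only a sketch in the style of the code on the later slides); defV and useV are then just the "inverse" of defN and useN:

val defN = Array.fromList [[], ["a"], ["b"], ["c"], ["a"], [], []]
val useN = Array.fromList [[], [], ["a"], ["c","b"], ["b"], ["a"], ["c"]]

(* defV[v] / useV[v]: the nodes whose defN / useN entry mentions v *)
fun nodesMentioning table v =
  List.filter
    (fn n => List.exists (fn w => w = v) (Array.sub (table, n)))
    (List.tabulate (Array.length table, fn i => i))

(* nodesMentioning defN "a" = [1,4]      nodesMentioning useN "c" = [3,6] *)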

Setting up equations
  – A variable is live on an edge if there is a directed path from that edge to a use of the variable that does not go through any def.
  – A variable is live-in at a node if it is live on any in-edge of that node.
  – It is live-out if it is live on any out-edge.
Then the following equations hold:
  live-in[n]  = useN[n] ∪ (live-out[n] − defN[n])
  live-out[n] = ∪ { live-in[s] | s ∈ succ(n) }

Computing
We want the least fixed point of these equations: the smallest live-in and live-out sets such that the equations hold.
We can find this solution by iteration:
  – Start with empty sets for live-in and live-out.
  – Use the equations to add variables to the sets, one node at a time.
  – Repeat until the sets don't change any more.
Adding extra variables to the sets is safe, as long as the sets still obey the equations, but it inaccurately suggests that more variables are live than actually are.

The Problem
We want to compute live-in and live-out by using
  live-in[n]  = useN[n] ∪ (live-out[n] − defN[n])
  live-out[n] = ∪ { live-in[s] | s ∈ succ(n) }
together with what we already know:
  defN[1] = {a}   useN[1] = {}      succ[1] = {2}
  defN[2] = {b}   useN[2] = {a}     succ[2] = {3}
  defN[3] = {c}   useN[3] = {c,b}   succ[3] = {4}
  defN[4] = {a}   useN[4] = {b}     succ[4] = {5}
  defN[5] = {}    useN[5] = {a}     succ[5] = {6,2}
  defN[6] = {}    useN[6] = {c}     succ[6] = {}

Example
Using the equations and the defN, useN and succ tables from the previous slide, let's do node 5. Initially live-in[5] = { } and live-out[5] = { }.
  live-out[5] = ∪ { live-in[s] | s ∈ {6,2} } = live-in[6] ∪ live-in[2]
so now we need to do live-in[6] and live-in[2].

Solution
For correctness, the order in which we take the nodes doesn't matter, but it turns out to be fastest to take them in roughly reverse order:
  live-in[n]  = use[n] ∪ (live-out[n] − def[n])
  live-out[n] = ∪ { live-in[s] | s ∈ succ(n) }

  node | use  def | 1st: out  in | 2nd: out  in | 3rd: out  in
    6  |  c       |           c  |           c  |           c
    5  |  a       |   c      ac  |   ac     ac  |   ac     ac
    4  |  b    a  |   ac     bc  |   ac     bc  |   ac     bc
    3  |  bc   c  |   bc     bc  |   bc     bc  |   bc     bc
    2  |  a    b  |   bc     ac  |   bc     ac  |   bc     ac
    1  |       a  |   ac     c   |   ac     c   |   ac     c

The iteration reaches its fixed point in the second pass; the third pass just confirms that nothing changes.

Implementation issues
The algorithm always terminates, because each iteration must enlarge at least one set, and the sets are limited in size (by the total number of variables).
Time complexity is O(N^4) worst-case, but between O(N) and O(N^2) in practice.
Typically the analysis is done using entire basic blocks as nodes.
We can compute liveness for all variables in parallel (as here) or independently for each variable, on demand.
Sets can be represented as bit vectors or linked lists; the best choice depends on set density. (A small sketch of the bit-vector idea follows.)
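If the variables are numbered, the bit-vector representation can be a single machine word. The following is only an illustrative sketch (not code from the lecture), built on the Basis Word operations:

(* Variable i is in the set iff bit i of the word is set. *)
type bitset = Word.word
val bvEmpty : bitset = 0w0
fun bvSingle i      = Word.<< (0w1, Word.fromInt i)
fun bvUnion (s, t)  = Word.orb (s, t)
fun bvMinus (s, t)  = Word.andb (s, Word.notb t)
fun bvMember (s, i) = Word.andb (s, bvSingle i) <> bvEmpty

(* live-in[n] = useN[n] ∪ (live-out[n] − defN[n]) becomes one word expression: *)
fun bvLiveIn (use, out, def) = bvUnion (use, bvMinus (out, def))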

ML code
First we need operations over sets:
  – union
  – setMinus
  – normalization

fun union [] ys = ys
  | union (x::xs) ys =
      if List.exists (fn z => z = x) ys
      then union xs ys
      else x :: union xs ys

SetMinus

fun remove x [] = []
  | remove x (y::ys) = if x = y then ys else y :: remove x ys;

fun setMinus xs [] = xs
  | setMinus xs (y::ys) = setMinus (remove y xs) ys

Normalization

fun sort' comp [] ans = ans
  | sort' comp [x] ans = x :: ans
  | sort' comp (x::xs) ans =
      let fun LE x y = (case comp (x,y) of GREATER => false | _ => true)
          fun GT x y = (case comp (x,y) of GREATER => true  | _ => false)
          val small = List.filter (GT x) xs
          val big   = List.filter (LE x) xs
      in sort' comp small (x :: sort' comp big ans) end;

fun nub [] = []
  | nub [x] = [x]
  | nub (x::y::xs) = if x = y then nub (x::xs) else x :: nub (y::xs);

fun norm x = nub (sort' String.compare x [])

liveness algorithm

fun computeInOut succ defN useN live_in live_out range =
  let open Array
      fun out n =
        let val nexts = sub (succ, n)
            fun getLive x = sub (live_in, x)
            val listOflists = map getLive nexts
            val all = norm (List.concat listOflists)
        in update (live_out, n, all) end
      fun inF n =
        let val ans = union (sub (useN, n))
                            (setMinus (sub (live_out, n)) (sub (defN, n)))
        in update (live_in, n, norm ans) end
      fun run i = (out i; inF i)
  in map run range end;

Array access functions:
  x[i]      is  sub (x, i)
  x[i] = e  is  update (x, i, e)

Successive calls to computeInOut; the arrays are displayed after each call, and the values stop changing once the fixed point is reached:

val it = [|[],[],[],[],[],[],[]|]
- computeInOut succ defN useN live_in live_out [6,5,4,3,2,1];
val it = [|[],[],["a"],["b","c"],["b"],[],["c"]|]
val it = [|[],[],[],[],[],["a"],[]|]
- computeInOut succ defN useN live_in live_out [6,5,4,3,2,1];
val it = [|[],[],["a"],["b","c"],["b"],[],["c"]|]
val it = [|[],["a"],["b","c"],["b"],[],["a","c"],[]|]
- computeInOut succ defN useN live_in live_out [6,5,4,3,2,1];
val it = [|[],[],["a","c"],["b","c"],["b"],["a","c"],["c"]|]
val it = [|[],["a"],["b","c"],["b"],[],["a","c"],[]|]
- computeInOut succ defN useN live_in live_out [6,5,4,3,2,1];
val it = [|[],[],["a","c"],["b","c"],["b"],["a","c"],["c"]|]
val it = [|[],["a","c"],["b","c"],["b"],["a","c"],["a","c"],[]|]
- computeInOut succ defN useN live_in live_out [6,5,4,3,2,1];
val it = [|[],["c"],["a","c"],["b","c"],["b","c"],["a","c"],["c"]|]
val it = [|[],["a","c"],["b","c"],["b"],["a","c"],["a","c"],[]|]
- computeInOut succ defN useN live_in live_out [6,5,4,3,2,1];
val it = [|[],["c"],["a","c"],["b","c"],["b","c"],["a","c"],["c"]|]
val it = [|[],["a","c"],["b","c"],["b","c"],["a","c"],["a","c"],[]|]
- computeInOut succ defN useN live_in live_out [6,5,4,3,2,1];
val it = [|[],["c"],["a","c"],["b","c"],["b","c"],["a","c"],["c"]|]
val it = [|[],["a","c"],["b","c"],["b","c"],["a","c"],["a","c"],[]|]

Fixed point algorithm
Repeat computeInOut until live_in and live_out remain unchanged after a full iteration. Comparing whole arrays is expensive; since we never subtract anything from one of these arrays, we need only detect when we assign a value to a particular index that is different from the one already there. A full iteration with no changes means we've reached a fixpoint.

fun change (array, index, value) =
  let val old = Array.sub (array, index)
  in Array.update (array, index, value);
     Bool.not (old = value)
  end;

change returns true only if we've made a change.

Second try

fun computeInOut succ defN useN live_in live_out range =
  let open Array
      fun out n =
        let val nexts = sub (succ, n)
            fun getLive x = sub (live_in, x)
            val listOflists = map getLive nexts
            val all = norm (List.concat listOflists)
        in change (live_out, n, all) end
      fun inF n =
        let val ans = union (sub (useN, n))
                            (setMinus (sub (live_out, n)) (sub (defN, n)))
        in change (live_in, n, norm ans) end
      fun run (i, change) = (out i orelse inF i orelse change)
  in List.foldr run false range end;

run returns true only if a change has been made; the foldr iterates over all the nodes and reports whether any of them changed. Note that the accumulated change flag starts at false. (Because orelse short-circuits, inF i is skipped on a pass where out i changed; the outer loop's extra passes make up for that.)

Keep applying

fun try succ defN useN =
  let val n = Array.length succ
      val live_in  = Array.array (n, [] : string list)
      val live_out = Array.array (n, [] : string list)
      fun repeat () =
        if computeInOut succ defN useN live_in live_out [6,5,4,3,2,1]
        then repeat ()
        else ()
  in repeat (); (live_out, live_in) end;
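Assuming the succ, defN, and useN arrays for the running example are already bound (as in the interactive session shown earlier), the whole analysis is then a single call; the expected results are the fixed-point arrays from that session:

val (live_out, live_in) = try succ defN useN;
(* Array.sub (live_in, 2)  = ["a","c"]
   Array.sub (live_out, 5) = ["a","c"] *)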

Static vs Dynamic Liveness
Consider the graph
  1: a = b*b
  2: c = a+b
  3: c >= b ?    (branches to nodes 4 and 5)
  4: return a
  5: return c
Is a live-out at node 2?

Some thoughts
It depends on whether control flow ever reaches node 4. A smart compiler could answer no. A smarter compiler could answer similar questions about more complicated programs. But no compiler can always answer such questions correctly. This is a consequence of the uncomputability of the Halting Problem.
So we must be content with static liveness, which talks about paths of control-flow edges and is just a conservative approximation of dynamic liveness, which talks about actual execution paths.

The Halting Problem
Theorem: There is no program H that takes as input any program P and its input X, and (without infinite-looping) returns true if P(X) halts and false if P(X) infinite-loops.
Proof: Suppose there were such an H. From it, construct the function
  F(Y) = if H(Y,Y) then (while true do ()) else 1
Now consider F(F).
  – If F(F) halts, then, by the definition of H, H(F,F) is true, so the then clause executes, so F(F) does not halt.
  – But if F(F) loops forever, then H(F,F) is false, so the else clause is taken, so F(F) halts.
  – Hence F(F) halts if and only if it doesn't halt.
Since we've reached a contradiction, the initial assumption is wrong: there can be no such H.

Consequence
Corollary: No program H'(P,X,L) can tell, for any program P, input X, and label L within P, whether L is ever reached during an execution of P on X.
Proof: If we had H', we could construct H. Consider a program transformation T that, from any program P, constructs a new program by putting a label L at the end of the program and changing every halt into goto L. Then H(P,X) = H'(T(P),X,L).

Register Interference Graphs
Mixing instruction selection and register allocation gets confusing; we need a more systematic way to look at the problem.
Initially generate code assuming an infinite number of "logical" registers, then calculate live ranges:

                            Live after instr.
  ld  a,t0      ; a:t0      t0
  ld  b,t1      ; b:t1      t0 t1
  sub t0,t1,t2  ; t:t2      t0 t2
  ld  c,t3      ; c:t3      t0 t2 t3
  sub t0,t3,t4  ; u:t4      t2 t4
  add t2,t4,t5  ; v:t5      t4 t5
  add t5,t4,t6  ; d:t6      t6
  st  t6,d

Build a register interference graph, which has
  – a node for each logical register, and
  – an edge between two nodes if the corresponding registers are simultaneously live.

Example
The live-after sets from the listing above give the interference graph. [Figure: nodes t0 through t6; reading the edges off the live-after sets gives t0-t1, t0-t2, t0-t3, t2-t3, t2-t4 and t4-t5, with t6 interfering with nothing.] A small ML sketch of this construction follows.
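As an illustration (a sketch, not the course's own code), the interference edges can be computed directly from the live-after sets in the listing above: connect every pair of temporaries that appear together in some set.

(* Live-after sets, one per instruction in the listing above. *)
val liveAfter =
  [ ["t0"], ["t0","t1"], ["t0","t2"], ["t0","t2","t3"],
    ["t2","t4"], ["t4","t5"], ["t6"] ]

(* All unordered pairs drawn from one set. *)
fun pairs [] = []
  | pairs (x::xs) = map (fn y => (x, y)) xs @ pairs xs

(* Edge list of the interference graph; duplicate edges are harmless here. *)
val edges = List.concat (map pairs liveAfter)
(* edges = [("t0","t1"), ("t0","t2"), ("t0","t2"), ("t0","t3"),
            ("t2","t3"), ("t2","t4"), ("t4","t5")] *)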

A coloring of a graph is an assignment of colors to nodes such that no two connected nodes have the same color. (Like coloring a map, where nodes are countries and edges connect countries with a common border.)
Suppose we have k physical registers available. Then the aim is to color the interference graph with k or fewer colors. This implies we can allocate logical registers to physical registers without spilling.
In general, determining whether a graph can be k-colored is hard (NP-complete, and hence probably exponential). But a simple heuristic will usually find a k-coloring if there is one.

Graph Coloring Heuristic
1. Choose a node with fewer than k neighbors.
2. Remove that node. Note that if we can color the resulting graph with k colors, we can also color the original graph, by giving the deleted node a color different from all its neighbors.
Repeat until either
  – there are no nodes with fewer than k neighbors, in which case we must spill; or
  – the graph is gone, in which case we can color the original graph by adding the deleted nodes back in, one at a time, and coloring them.
A sketch of this heuristic in ML appears below.
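Here is a compact ML sketch of this simplify-then-select heuristic, working over the edge-list representation from the interference-graph sketch above (the helper names are made up for this illustration; NONE means the heuristic got stuck and we would have to spill):

fun neighbors edges n =
  List.mapPartial
    (fn (a, b) => if a = n then SOME b
                  else if b = n then SOME a
                  else NONE)
    edges

fun color k edges nodes =
  let
    (* Simplify: repeatedly remove a node with fewer than k neighbors among
       the remaining nodes, pushing the removed nodes on a stack. *)
    fun lowDegree remaining n =
      length (List.filter (fn m => List.exists (fn x => x = m) remaining)
                          (neighbors edges n)) < k
    fun simplify [] stack = SOME stack
      | simplify remaining stack =
          (case List.find (lowDegree remaining) remaining of
             NONE   => NONE      (* every remaining node has >= k neighbors: spill *)
           | SOME n => simplify (List.filter (fn m => m <> n) remaining)
                                (n :: stack))
    (* Select: pop the stack and give each node a color its colored neighbors
       don't use; fewer than k of them are colored, so a color is free. *)
    fun select [] coloring = coloring
      | select (n :: rest) coloring =
          let val used =
                List.mapPartial
                  (fn (m, c) =>
                     if List.exists (fn x => x = m) (neighbors edges n)
                     then SOME c else NONE)
                  coloring
              val c = valOf (List.find
                               (fn c => not (List.exists (fn u => u = c) used))
                               (List.tabulate (k, fn i => i)))
          in select rest ((n, c) :: coloring) end
  in
    Option.map (fn stack => select stack []) (simplify nodes [])
  end

(* color 3 edges ["t0","t1","t2","t3","t4","t5","t6"] finds a 3-coloring;
   color 2 edges ["t0","t1","t2","t3","t4","t5","t6"] returns NONE. *)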

Example: Find a 3-coloring
[Figure: starting from the full interference graph on t0 .. t6, the graph after removing t6 (leaving t0 .. t5), then after removing t5 (leaving t0 .. t4), then after removing t4 (leaving t0 .. t3).]

[Figure, continued: removing t2 leaves t0, t1, t3; removing t1 leaves t0, t3; removing t3 leaves t0; the final panel shows the full graph on t0 .. t6 again.]

There cannot be a 2-coloring (why not?). [Figure: the interference graph on t0 .. t6 again.]

More about Flow Graphs
Nodes: basic blocks
Edges: branches between blocks
Example: factorial
  (1) f := 1
  (2) i := 2
  (3) if i > n then goto (7)
  (4) f := f*i
  (5) i := i+1
  (6) goto (3)
  (7) return

Flow Graphs (cont.)
The statements group into four basic blocks:
  #1: (1) f := 1
      (2) i := 2
  #2: (3) if i > n then goto (7)    [taken branch to block #4]
  #3: (4) f := f*i
      (5) i := i+1
      (6) goto (3)                  [back to block #2]
  #4: (7) return
The flow-graph edges are #1 -> #2, #2 -> #3, #2 -> #4, and #3 -> #2.

Flow Graph Relations
Successor: j `succ` i if (i,j) is an edge in the flow graph.
  (2 `succ` 1, 1 `pred` 2, 3 `succ` 2, etc.)
Predecessor: the inverse of successor.
Dominator: i `dom` j if i is on every path from 1 (the initial node) to j.
  (1 `dom` 2, 1 `dom` 3, 1 `dom` 4; 2 `dom` 3, 2 `dom` 4)
Immediate dominator: i `idom` j if i `dom` j and there is no node k other than i and j such that i `dom` k and k `dom` j.
  (1 `idom` 2; 2 `idom` 3, 2 `idom` 4)
A sketch of computing dominators in ML follows.
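Dominator sets can be computed with the same kind of fixed-point iteration used for liveness, only forward: dom[1] = {1}, and for every other node n, dom[n] = {n} ∪ (the intersection of dom[p] over the predecessors p of n). Below is a sketch in ML for the factorial flow graph above (an illustration, not course code; nodes are numbered 1..4 and array index 0 is unused):

(* Intersection of two lists, and of a list of lists. *)
fun inter xs ys = List.filter (fn x => List.exists (fn y => y = x) ys) xs
fun interAll []          = []
  | interAll [s]         = s
  | interAll (s :: rest) = inter s (interAll rest)

(* Predecessor lists of the factorial flow graph: edges 1->2, 2->3, 2->4, 3->2. *)
val predF  = Array.fromList [[], [], [1,3], [2], [2]]
val nodesF = [1, 2, 3, 4]

fun domStep dom =
  map (fn n =>
         if n = 1 then [1]
         else let val ps = Array.sub (predF, n)
                  val common = interAll (map (fn p => List.nth (dom, p - 1)) ps)
              in n :: List.filter (fn x => x <> n) common end)
      nodesF

(* Iterate downward from the optimistic "every node dominates every node". *)
fun fixDom dom =
  let val dom' = domStep dom
  in if dom' = dom then dom else fixDom dom' end

val dominators = fixDom (map (fn _ => nodesF) nodesF)
(* dominators = [[1], [2,1], [3,2,1], [4,2,1]]:
   node 1 dominates everything; node 2 dominates 3 and 4, as claimed above. *)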

Flow Graph Relations (cont.)
Each node other than the initial node has a unique immediate dominator.
If i `idom` k, then for every node m other than k, (m `dom` k) => (m `dom` i).
Consequence:
  – there exists a dominator tree (with the same nodes as the flow graph, but different edges) where each node in the dominator tree has an out-edge only to the nodes it immediately dominates.
[Figure: the factorial flow graph and its dominator tree.]

Flow Graph Relations (cont.)
The dominator tree edges are not necessarily flow graph edges.
[Figure: an original flow graph and its dominator tree. In the flow graph, control can reach node 4 from node 1 through either node 2 or node 3.]
Note: every path from 1 to 4 must go through 1, but it can go through either 2 or 3, so neither 2 nor 3 dominates 4. Node 4's immediate dominator is 1, even though (1,4) need not be a flow-graph edge.

Flow Graph Relations (cont.)
Flow graph application: finding loops.
An edge from B to A (in the flow graph) is a back edge iff A dominates B (i.e., there is a path from A to B in the dominator tree). (A sketch of finding back edges follows.)
If we remove all back edges, only forward edges remain. If this graph has no cycles (i.e., it is a DAG), then the original flow graph is known as a reducible graph.
In a reducible graph:
  – every loop contains a back edge
  – there are no jumps from outside into the middle of a loop
Non-reducible graphs are rare; in practice they require goto.
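Given dominator sets like the ones from the sketch a few slides back, back edges are easy to find. This sketch reuses the dominators and nodesF bindings from that sketch and adds a successor array for the factorial graph (all names are from the earlier illustration, not from the course code):

(* Successor lists of the factorial flow graph; index 0 unused. *)
val succF = Array.fromList [[], [2], [3,4], [2], []]

(* An edge b -> a is a back edge iff a dominates b. *)
fun dominates doms a b = List.exists (fn x => x = a) (List.nth (doms, b - 1))

fun backEdges doms succ nodes =
  List.concat
    (map (fn b =>
            List.mapPartial
              (fn a => if dominates doms a b then SOME (b, a) else NONE)
              (Array.sub (succ, b)))
         nodes)

(* backEdges dominators succF nodesF = [(3,2)]:
   the goto from block #3 back to the loop test in block #2. *)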

Next time
Next time we'll use flow graphs to implement some more optimizations.