
1 Cse322, Programming Languages and Compilers 1 6/14/2015 Lecture #13, May 15, 2007
Control flow graphs, liveness using data flow, dataflow equations, using fixed points, dynamic liveness, the halting problem, register interference graphs, graph coloring, flow graph relations, dominators.

2 Cse322, Programming Languages and Compilers 2 6/14/2015 Assignments
Project #2 is due on Monday, May 22, 2006.
– The project template is ready. Please notify me, and I will email you the template.
Reading – same as on Monday: Chapter 9, Sections 9.1 and 9.2, Liveness analysis, pp. 433–452.
Possible quiz next Monday.

3 Cse322, Programming Languages and Compilers 3 6/14/2015 Control Flow Graphs
To assign registers on a per-procedure basis, we need to perform liveness analysis on the entire procedure, not just on basic blocks. To analyze the properties of entire procedures with multiple basic blocks, we use a control-flow graph. In its simplest form, a control-flow graph has one node per statement, and an edge from n1 to n2 if control can ever flow directly from the statement at n1 to the statement at n2.

4 Cse322, Programming Languages and Compilers 4 6/14/2015
We write pred[n] for the set of predecessors of node n, and succ[n] for the set of successors. (In practice, we usually build control-flow graphs where each node is a basic block rather than a single statement.)
Example routine:
        a = 0
   L:   b = a + 1
        c = c + b
        a = b * 2
        if a < N goto L
        return c

5 Cse322, Programming Languages and Compilers 5 6/14/2015 Example
Flow graph of the example routine:
   1: a = 0
   2: b = a + 1
   3: c = c + b
   4: a = b * 2
   5: a < N        (true: back to node 2;  false: fall through to node 6)
   6: return c
Edges: 1→2, 2→3, 3→4, 4→5, 5→2, 5→6.

pred[1] = {} (entry)    succ[1] = {2}
pred[2] = {1,5}         succ[2] = {3}
pred[3] = {2}           succ[3] = {4}
pred[4] = {3}           succ[4] = {5}
pred[5] = {4}           succ[5] = {6,2}
pred[6] = {5}           succ[6] = {}

6 Cse322, Programming Languages and Compilers 6 6/14/2015 Liveness Analysis using Dataflow
Working from the future to the past, we can determine the edges over which each variable is live. In the example:
– b is live on 2→3 and on 3→4.
– a is live on 1→2, on 4→5, and on 5→2 (but not on 2→3 or 3→4).
– c is live throughout (including on entry→1).
Since a and b are never live at the same time, they can share a register; c takes the other, so two registers suffice to hold a, b, and c.

7 Cse322, Programming Languages and Compilers 7 6/14/2015 Dataflow equations
We can do liveness analysis (and many other analyses) via dataflow analysis.
A node defines a variable if its corresponding statement assigns to it.
A node uses a variable if its corresponding statement mentions that variable in an expression (e.g., on the right-hand side of an assignment).
– Recall our ML function varsOf.

8 Cse322, Programming Languages and Compilers 8 6/14/2015 Definitions
For any variable v, define:
– defV[v] = set of graph nodes that define v
– useV[v] = set of graph nodes that use v
Similarly, for any node n, define:
– defN[n] = set of variables defined by node n
– useN[n] = set of variables used by node n

9 Cse322, Programming Languages and Compilers 9 6/14/2015 Example
[Same flow graph as on slide 5: 1: a = 0, 2: b = a + 1, 3: c = c + b, 4: a = b * 2, 5: a < N, 6: return c.]
defV[a] = {1,4}     useV[a] = {2,5}
defV[b] = {2}       useV[b] = {3,4}
defV[c] = {?,3}     useV[c] = {3,6}
(The '?' marks the fact that c must also be defined somewhere before entry, since node 3 uses c before any definition in the graph.)

defN[1] = {a}    useN[1] = {}
defN[2] = {b}    useN[2] = {a}
defN[3] = {c}    useN[3] = {c,b}
defN[4] = {a}    useN[4] = {b}
defN[5] = {}     useN[5] = {a}
defN[6] = {}     useN[6] = {c}

10 Cse322, Programming Languages and Compilers 10 6/14/2015 Setting up equations
– A variable is live on an edge if there is a directed path from that edge to a use of the variable that does not go through any def of it.
– A variable is live-in at a node if it is live on any in-edge of that node;
– it is live-out if it is live on any out-edge.
Then the following equations hold:
   live-in[n]  = useN[n] ∪ (live-out[n] − defN[n])
   live-out[n] = ∪ { live-in[s] : s ∈ succ[n] }
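For instance, with the values the analysis eventually computes for this example (see the solution table on slide 14), node 3 (c = c + b) satisfies the first equation:
   live-in[3] = useN[3] ∪ (live-out[3] − defN[3]) = {b,c} ∪ ({b,c} − {c}) = {b,c}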

11 Cse322, Programming Languages and Compilers 11 6/14/2015 Computing
We want the least fixed point of these equations: the smallest live-in and live-out sets that satisfy them. We can find this solution by iteration:
– Start with empty live-in and live-out sets.
– Use the equations to add variables to the sets, one node at a time.
– Repeat until the sets no longer change.
Larger sets that still satisfy the equations are also safe, but they are less accurate: they report variables as live that need not be.

12 Cse322, Programming Languages and Compilers 12 6/14/2015 The Problem
We want to compute live-in and live-out by using
   live-in[n]  = useN[n] ∪ (live-out[n] − defN[n])
   live-out[n] = ∪ { live-in[s] : s ∈ succ[n] }
from what we already know:
defN[1] = {a}    useN[1] = {}      succ[1] = {2}
defN[2] = {b}    useN[2] = {a}     succ[2] = {3}
defN[3] = {c}    useN[3] = {c,b}   succ[3] = {4}
defN[4] = {a}    useN[4] = {b}     succ[4] = {5}
defN[5] = {}     useN[5] = {a}     succ[5] = {6,2}
defN[6] = {}     useN[6] = {c}     succ[6] = {}

13 Cse322, Programming Languages and Compilers 13 6/14/2015 Example
   live-in[n]  = useN[n] ∪ (live-out[n] − defN[n])
   live-out[n] = ∪ { live-in[s] : s ∈ succ[n] }
(defN, useN, and succ as on the previous slide.)
Let's do node 5. Initially live-out[5] = {} and live-in[5] = {}. Now
   live-out[5] = ∪ { live-in[s] : s ∈ {6,2} } = live-in[6] ∪ live-in[2]
so we first need live-in[6] and live-in[2].

14 Cse322, Programming Languages and Compilers 14 6/14/2015 Solution
For correctness, the order in which we take the nodes doesn't matter, but it turns out to be fastest to take them in roughly reverse order:
   live-in[n]  = use[n] ∪ (live-out[n] − def[n])
   live-out[n] = ∪ { live-in[s] : s ∈ succ[n] }

 node | use | def | 1st: out, in | 2nd: out, in | 3rd: out, in
   6  |  c  |     |       , c   |       , c   |       , c
   5  |  a  |     |     c , ac  |    ac , ac  |    ac , ac
   4  |  b  |  a  |    ac , bc  |    ac , bc  |    ac , bc
   3  | bc  |  c  |    bc , bc  |    bc , bc  |    bc , bc
   2  |  a  |  b  |    bc , ac  |    bc , ac  |    bc , ac
   1  |     |  a  |    ac , c   |    ac , c   |    ac , c
(The third pass repeats the second, so the iteration has reached its fixed point.)

15 Cse322, Programming Languages and Compilers 15 6/14/2015 Implementation issues
The algorithm always terminates, because each iteration must enlarge at least one set, and the sets are bounded in size (by the total number of variables).
Time complexity is O(N^4) in the worst case, but between O(N) and O(N^2) in practice.
Typically the analysis is done using entire basic blocks as nodes.
Liveness can be computed for all variables in parallel (as here) or independently for each variable, on demand.
Sets can be represented as bit vectors or linked lists; the best choice depends on set density.
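As an illustration of the bit-vector choice (a sketch, not part of the lecture code; the bit assignment for the three example variables is assumed):
val bitOf = fn "a" => 0w1 | "b" => 0w2 | "c" => 0w4 | _ => 0w0;
fun unionW (x, y)    = Word.orb (x, y)                (* set union as bitwise or *)
fun setMinusW (x, y) = Word.andb (x, Word.notb y)     (* set difference as and-not *)
(* e.g. unionW (bitOf "a", bitOf "c") = 0w5, representing the set {a,c} *)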

16 Cse322, Programming Languages and Compilers 16 6/14/2015 ML code
First we need operations over sets:
– union
– setMinus
– normalization

fun union [] ys = ys
  | union (x::xs) ys =
      if List.exists (fn z => z = x) ys
      then union xs ys
      else x :: (union xs ys)
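As a quick check of what union computes (a hypothetical REPL interaction, not from the slides):
- union ["a","b"] ["b","c"];
val it = ["a","b","c"] : string list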

17 Cse322, Programming Languages and Compilers 17 6/14/2015 SetMinus
fun remove x [] = []
  | remove x (y::ys) = if x = y then ys else y :: remove x ys;

fun setMinus xs [] = xs
  | setMinus xs (y::ys) = setMinus (remove y xs) ys
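Again as a hypothetical check:
- setMinus ["a","b","c"] ["b","d"];
val it = ["a","c"] : string list
Note that remove deletes only the first occurrence of an element; that is enough here because the lists are kept duplicate-free by normalization (next slide).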

18 Cse322, Programming Languages and Compilers 18 6/14/2015 Normalization
fun sort' comp [] ans = ans
  | sort' comp [x] ans = x :: ans
  | sort' comp (x::xs) ans =
      let fun LE x y = case comp(x,y) of GREATER => false | _ => true
          fun GT x y = case comp(x,y) of GREATER => true | _ => false
          val small = List.filter (GT x) xs
          val big = List.filter (LE x) xs
      in sort' comp small (x :: (sort' comp big ans)) end;

fun nub [] = []
  | nub [x] = [x]
  | nub (x::y::xs) = if x = y then nub (x::xs) else x :: (nub (y::xs));

fun norm x = nub (sort' String.compare x [])
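So norm gives every set of variables a single canonical representation — sorted by the quicksort-style sort' and with adjacent duplicates dropped by nub (a hypothetical interaction):
- norm ["c","a","c","b"];
val it = ["a","b","c"] : string list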

19 Cse322, Programming Languages and Compilers 19 6/14/2015 liveness algorithm
fun computeInOut succ defN useN live_in live_out range =
  let open Array
      fun out n =
        let val nexts = sub(succ,n)
            fun getLive x = sub(live_in,x)
            val listOflists = map getLive nexts
            val all = norm (List.concat listOflists)
        in update(live_out,n,all) end
      fun inF n =
        let val ans = union (sub(useN,n))
                            (setMinus (sub(live_out,n)) (sub(defN,n)))
        in update(live_in,n,norm ans) end
      fun run i = (out i; inF i)
  in map run range end;

Array access functions:  x[i] is written sub(x,i);  x[i] = e is written update(x,i,e).
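Before the REPL session on the next slide can be replayed, the graph tables of slide 12 need to exist as top-level arrays. One possible setup — these exact declarations are assumed here, not shown on the slides; index 0 is unused so that node numbers match array indices:
val succ = Array.fromList [[], [2], [3], [4], [5], [6,2], []];
val defN = Array.fromList [[], ["a"], ["b"], ["c"], ["a"], [], []];
val useN = Array.fromList [[], [], ["a"], ["b","c"], ["b"], ["a"], ["c"]];
val live_in  = Array.array (7, [] : string list);
val live_out = Array.array (7, [] : string list);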

20 Cse322, Programming Languages and Compilers 20 6/14/2015
val it = [|[],[],[],[],[],[],[]|]
- computeInOut succ defN useN live_in live_out [6,5,4,3,2,1];
val it = [|[],[],["a"],["b","c"],["b"],[],["c"]|]
val it = [|[],[],[],[],[],["a"],[]|]
- computeInOut succ defN useN live_in live_out [6,5,4,3,2,1];
val it = [|[],[],["a"],["b","c"],["b"],[],["c"]|]
val it = [|[],["a"],["b","c"],["b"],[],["a","c"],[]|]
- computeInOut succ defN useN live_in live_out [6,5,4,3,2,1];
val it = [|[],[],["a","c"],["b","c"],["b"],["a","c"],["c"]|]
val it = [|[],["a"],["b","c"],["b"],[],["a","c"],[]|]
- computeInOut succ defN useN live_in live_out [6,5,4,3,2,1];
val it = [|[],[],["a","c"],["b","c"],["b"],["a","c"],["c"]|]
val it = [|[],["a","c"],["b","c"],["b"],["a","c"],["a","c"],[]|]
- computeInOut succ defN useN live_in live_out [6,5,4,3,2,1];
val it = [|[],["c"],["a","c"],["b","c"],["b","c"],["a","c"],["c"]|]
val it = [|[],["a","c"],["b","c"],["b"],["a","c"],["a","c"],[]|]
- computeInOut succ defN useN live_in live_out [6,5,4,3,2,1];
val it = [|[],["c"],["a","c"],["b","c"],["b","c"],["a","c"],["c"]|]
val it = [|[],["a","c"],["b","c"],["b","c"],["a","c"],["a","c"],[]|]
- computeInOut succ defN useN live_in live_out [6,5,4,3,2,1];
val it = [|[],["c"],["a","c"],["b","c"],["b","c"],["a","c"],["c"]|]
val it = [|[],["a","c"],["b","c"],["b","c"],["a","c"],["a","c"],[]|]

21 Cse322, Programming Languages and Compilers 21 6/14/2015 Fixed point algorithm
Repeat computeInOut until live_in and live_out remain unchanged after a full iteration. Comparing the whole arrays is expensive; since we never remove anything from these arrays, we need only detect when we assign a value to a particular index that differs from the one already there. A full iteration with no changes means we've reached a fixpoint.

fun change (array,index,value) =
  let val old = Array.sub(array,index)
  in Array.update(array,index,value);
     Bool.not (old = value)
  end;

change returns true only if it actually changed the array.

22 Cse322, Programming Languages and Compilers 22 6/14/2015 Second try
fun computeInOut succ defN useN live_in live_out range =
  let open Array
      fun out n =
        let val nexts = sub(succ,n)
            fun getLive x = sub(live_in,x)
            val listOflists = map getLive nexts
            val all = norm (List.concat listOflists)
        in change(live_out,n,all) end        (* true only if live_out[n] changed *)
      fun inF n =
        let val ans = union (sub(useN,n))
                            (setMinus (sub(live_out,n)) (sub(defN,n)))
        in change(live_in,n,norm ans) end    (* true only if live_in[n] changed *)
      fun run(i,change) = (out i orelse inF i orelse change)
  in List.foldr run false range end;

This version returns true only if a change has been made: the fold iterates over all n in range and reports whether anything changed, with the accumulator change (which here shadows the function change) starting at false. Because orelse short-circuits, inF i is skipped on a pass in which out i already reported a change; the surrounding repetition still reaches the same fixed point, just possibly after a few extra passes.

23 Cse322, Programming Languages and Compilers 23 6/14/2015 Keep applying
fun try succ defN useN =
  let val n = Array.length succ
      val live_in  = Array.array(n,[]:string list)
      val live_out = Array.array(n,[]:string list)
      fun repeat () =
        if computeInOut succ defN useN live_in live_out [6,5,4,3,2,1]   (* node order hard-wired to the 6-node example *)
        then repeat ()
        else ()
  in repeat(); (live_out,live_in) end;
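With the succ/defN/useN arrays sketched after slide 19, the whole analysis can then be run in one call; the result should match the fixed point reached on slide 20 (a hypothetical interaction):
- val (live_out, live_in) = try succ defN useN;
val live_out = [|[],["a","c"],["b","c"],["b","c"],["a","c"],["a","c"],[]|] : string list array
val live_in = [|[],["c"],["a","c"],["b","c"],["b","c"],["a","c"],["c"]|] : string list array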

24 Cse322, Programming Languages and Compilers 24 6/14/2015 Static vs Dynamic Liveness
Consider the graph:
   1: a = b*b
   2: c = a+b
   3: c >= b ?
   4: return a        5: return c
with edges 1→2, 2→3, 3→4, 3→5.
Is a live-out at node 2?

25 Cse322, Programming Languages and Compilers 25 6/14/2015 Some thoughts
It depends on whether control flow ever reaches node 4. A smart compiler could answer no: here a = b*b can never be negative (ignoring overflow), so c = a+b is always at least b, and the test at node 3 always goes the same way. A smarter compiler could answer similar questions about more complicated programs. But no compiler can always answer such questions correctly; this is a consequence of the uncomputability of the Halting Problem. So we must be content with static liveness, which talks about paths of control-flow edges and is just a conservative approximation of dynamic liveness, which talks about actual execution paths.

26 Cse322, Programming Languages and Compilers 26 6/14/2015 The Halting Problem
Theorem: There is no program H that takes as input any program P and its input X and (without itself looping forever) returns true if P(X) halts and false if P(X) loops forever.
Proof: Suppose there were such an H. From it, construct the function
   F(Y) = if H(Y,Y) then (while true do ()) else 1
Now consider F(F).
– If F(F) halts, then, by the definition of H, H(F,F) is true, so the then-clause executes, so F(F) does not halt.
– But if F(F) loops forever, then H(F,F) is false, so the else-clause is taken, so F(F) halts.
– Hence F(F) halts if and only if it doesn't halt. Since we've reached a contradiction, the initial assumption is wrong: there can be no such H.

27 Cse322, Programming Languages and Compilers 27 6/14/2015 Consequence
Corollary: No program H'(P,X,L) can tell, for any program P, input X, and label L within P, whether L is ever reached on an execution of P on X.
Proof: If we had H', we could construct H. Consider a program transformation T that, from any program P, constructs a new program by putting a label L at the end of the program and changing every halt into goto L. Then H(P,X) = H'(T(P),X,L).

28 Cse322, Programming Languages and Compilers 28 6/14/2015 Register Interference Graphs
Mixing instruction selection and register allocation gets confusing; we need a more systematic way to look at the problem. Initially generate code assuming an infinite number of ``logical'' registers, then calculate live ranges:
                                 Live after instr.
   ld  a,t0       ; a:t0        t0
   ld  b,t1       ; b:t1        t0 t1
   sub t0,t1,t2   ; t:t2        t0 t2
   ld  c,t3       ; c:t3        t0 t2 t3
   sub t0,t3,t4   ; u:t4        t2 t4
   add t2,t4,t5   ; v:t5        t4 t5
   add t5,t4,t6   ; d:t6        t6
   st  t6,d
Build a register interference graph, which has
– a node for each logical register, and
– an edge between two nodes if the corresponding registers are simultaneously live.
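As a small sketch of how those edges could be derived mechanically from the live-after sets (the liveAfter list and the pairsOf helper below are assumptions for illustration, not part of the lecture code):
fun pairsOf []        = []
  | pairsOf (x :: xs) = map (fn y => (x, y)) xs @ pairsOf xs;

val liveAfter =
  [ ["t0"], ["t0","t1"], ["t0","t2"], ["t0","t2","t3"],
    ["t2","t4"], ["t4","t5"], ["t6"] ];

val edges = List.concat (map pairsOf liveAfter);
(* (t0,t1), (t0,t2), (t0,t2), (t0,t3), (t2,t3), (t2,t4), (t4,t5);
   a set representation would collapse the duplicate (t0,t2) *)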

29 Cse322, Programming Languages and Compilers 29 6/14/2015 Example
[Figure: the same code and live-after sets as on the previous slide, together with the interference graph on nodes t0, t1, t2, t3, t4, t5, t6: edges t0–t1, t0–t2, t0–t3, t2–t3, t2–t4, t4–t5, with t6 isolated.]

30 Cse322, Programming Languages and Compilers 30 6/14/2015
A coloring of a graph is an assignment of colors to nodes such that no two connected nodes have the same color. (Like coloring a map, where nodes = countries and edges connect countries with a common border.)
Suppose we have k physical registers available. Then the aim is to color the interference graph with k or fewer colors; this implies we can allocate logical registers to physical registers without spilling.
In the general case, determining whether a graph can be k-colored is hard (NP-complete, and hence probably requires exponential time). But a simple heuristic will usually find a k-coloring if there is one.

31 Cse322, Programming Languages and Compilers 31 6/14/2015 Graph Coloring Heuristic
1. Choose a node with fewer than k neighbors.
2. Remove that node. Note that if we can color the resulting graph with k colors, we can also color the original graph, by giving the deleted node a color different from all its neighbors.
Repeat until either
– there are no nodes with fewer than k neighbors, in which case we must spill; or
– the graph is gone, in which case we can color the original graph by adding the deleted nodes back in, one at a time, and coloring them (as sketched in ML below).
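The heuristic can be sketched directly in ML; nothing below is from the slides, and the representation (an explicit edge list, colors numbered 0 .. k-1) is just one plausible choice:
fun neighbors edges n =
  List.mapPartial
    (fn (a,b) => if a = n then SOME b else if b = n then SOME a else NONE)
    edges;

(* degree of n, counted only among the nodes still in the graph;
   duplicate edges would be counted twice, which only makes the
   heuristic more conservative *)
fun degree edges remaining n =
  length (List.filter (fn m => List.exists (fn r => r = m) remaining)
                      (neighbors edges n));

(* simplify: repeatedly remove a node with fewer than k neighbors,
   remembering the removal order; NONE signals a potential spill *)
fun simplify k edges [] stack = SOME stack
  | simplify k edges remaining stack =
      (case List.find (fn n => degree edges remaining n < k) remaining of
           NONE => NONE
         | SOME n => simplify k edges
                              (List.filter (fn m => m <> n) remaining)
                              (n :: stack));

(* select: add the nodes back (last removed first), giving each a color
   not used by its already-colored neighbors *)
fun select k edges stack =
  let fun colorOf coloring n =
        case List.find (fn (m,_) => m = n) coloring of
            NONE => NONE
          | SOME (_, c) => SOME c
      fun pick coloring n =
        let val taken = List.mapPartial (colorOf coloring) (neighbors edges n)
        in List.find (fn c => not (List.exists (fn t => t = c) taken))
                     (List.tabulate (k, fn i => i))
        end
      fun go coloring [] = SOME coloring
        | go coloring (n :: rest) =
            (case pick coloring n of
                 NONE => NONE
               | SOME c => go ((n, c) :: coloring) rest)
  in go [] stack end;
With k = 3, the node list ["t0","t1","t2","t3","t4","t5","t6"], and the edge list from the earlier sketch, simplify should find an elimination order and select should then assign each ti one of three colors.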

32 Cse322, Programming Languages and Compilers 32 6/14/2015 Example: Find a 3-coloring
[Figure: starting from the full interference graph on t0–t6, remove t6, then t5, then t4 (each has fewer than 3 remaining neighbors), leaving the subgraph on t0, t1, t2, t3.]

33 Cse322, Programming Languages and Compilers 33 6/14/2015 t0 t1 t3 remove 2 t0 t1 t2 t3 t0 t3 remove 1 t0 remove 3 t0 t1 t2 t3 t4 t5 t6

34 Cse322, Programming Languages and Compilers 34 6/14/2015
There cannot be a 2-coloring (why not?).
[Figure: the full interference graph on t0–t6 again.]

35 Cse322, Programming Languages and Compilers 35 6/14/2015 More about Flow Graphs
Nodes: basic blocks. Edges: branches between blocks.
Example: factorial
   (1) f := 1
   (2) i := 2
   (3) if i > n then goto (7)
   (4) f := f*i
   (5) i := i+1
   (6) goto (3)
   (7) return

36 Cse322, Programming Languages and Compilers 36 6/14/2015 Flow Graphs (cont.)
The statements grouped into basic blocks:
   #1:  (1) f := 1
        (2) i := 2
   #2:  (3) if i > n then goto (7)    (#4)
   #3:  (4) f := f*i
        (5) i := i+1
        (6) goto (3)                  (#2)
   #4:  (7) return
Flow graph edges: 1 → 2, 2 → 3, 2 → 4, 3 → 2.

37 Cse322, Programming Languages and Compilers 37 6/14/2015 Flow Graph Relations
Successor: j `succ` i if (i,j) is an edge in the flow graph.
Predecessor: the inverse of successor.
Dominator: i `dom` j if i is on every path from 1 (the initial node) to j.
Immediate dominator: i `idom` j if i `dom` j, i ≠ j, and there is no other node k (distinct from i and j) such that i `dom` k and k `dom` j.
For the factorial flow graph (edges 1→2, 2→3, 2→4, 3→2):
   2 `succ` 1, 1 `pred` 2, 3 `succ` 2, etc.
   1 `dom` 2, 1 `dom` 3, 1 `dom` 4;  2 `dom` 3, 2 `dom` 4
   1 `idom` 2;  2 `idom` 3, 2 `idom` 4
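Dominator sets can themselves be computed with a dataflow iteration of the same flavor as liveness (a standard construction, not covered on these slides): take D[1] = {1} for the initial node and D[n] = {n} ∪ ∩ { D[p] : p ∈ pred[n] } for every other node, initialize each D[n] to the set of all nodes, and iterate until nothing changes; here the interesting solution is the greatest fixed point rather than the least.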

38 Cse322, Programming Languages and Compilers 38 6/14/2015 Flow Graph Relations (cont.)
Each node other than the initial node has a unique immediate dominator.
If i `idom` k, then for all m ≠ k, (m `dom` k) implies (m `dom` i).
Consequence: there exists a dominator tree (with the same nodes as the flow graph, but different edges) in which each node has an out-edge only to the nodes it immediately dominates.
For the factorial example: flow graph edges 1→2, 2→3, 2→4, 3→2; dominator tree edges 1→2, 2→3, 2→4.

39 Cse322, Programming Languages and Compilers 39 6/14/2015 Flow Graph Relations (cont.)
The dominator tree edges are not necessarily flow graph edges. Consider the flow graph with edges 1→2, 1→3, 2→4, 3→4. Every path from 1 to 4 can go through either 2 or 3, so neither 2 nor 3 dominates 4; only 1 does. Hence 1 `idom` 4, and the dominator tree has edges 1→2, 1→3, 1→4, even though 1→4 is not a flow graph edge.

40 Cse322, Programming Languages and Compilers 40 6/14/2015 Flow Graph Relations (cont.)
Flow graph application: finding loops.
An edge from B to A (in the flow graph) is a back edge iff A dominates B (i.e., there is a path from A to B in the dominator tree).
If we remove all back edges, only forward edges remain. If the resulting graph has no cycles (i.e., it is a DAG), then the original flow graph is known as a reducible graph. In a reducible graph:
– every loop contains a back edge;
– there are no jumps from outside into the middle of a loop.
Non-reducible graphs are rare in practice (they require goto); the classic example is a flow graph in which node 1 branches to both 2 and 3 and nodes 2 and 3 branch to each other, so the cycle between 2 and 3 contains no back edge.
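For instance, in the factorial flow graph the edge 3 → 2 is a back edge, because block 2 dominates block 3 (slide 37). Removing it leaves the acyclic graph 1 → 2, 2 → 3, 2 → 4, so that flow graph is reducible, and the back edge 3 → 2 identifies the loop {2, 3}.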

41 Cse322, Programming Languages and Compilers 41 6/14/2015 Next time
Next time we'll use flow graphs to implement some more optimizations.

