From last lecture
We want to find a fixed point of F, that is to say a map m such that m = F(m).
Define ⊥, which is ⊥ lifted to be a map: ⊥ = λe. ⊥
Compute F(⊥), then F(F(⊥)), then F(F(F(⊥))), ... until the result doesn't change anymore.
From last lecture
If F is monotonic and the height of the lattice is finite, the iterative algorithm terminates.
If F is monotonic, the fixed point we find is the least fixed point.
What about if we start at top?
What if we start with ⊤: F(⊤), F(F(⊤)), F(F(F(⊤))), ...
We get the greatest fixed point.
Why do we prefer the least fixed point?
– More precise
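To make the iteration described above concrete, here is a minimal Python sketch (my own, not from the slides); the name fixed_point and its arguments are illustrative assumptions, and termination relies on F being monotonic over a finite-height lattice.

```python
# Hedged sketch: generic fixed-point iteration over a lattice.
# Assumes F is monotonic and the lattice has finite height, so the
# chain of iterates stabilizes after finitely many steps.
def fixed_point(F, start):
    """Iterate F from `start` until the value stops changing."""
    current = start
    while True:
        nxt = F(current)
        if nxt == current:     # F(m) = m: we have reached a fixed point
            return current
        current = nxt
```

Starting the iteration from ⊥ yields the least fixed point; starting from ⊤ yields the greatest fixed point, which is why the least one is preferred when more precision is wanted.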
Graphically
[Figure: plot with x and y axes.]
Graphically, another way
[Figure.]
Another example: constant prop
Set D =
F_{x := N}(in) =
F_{x := y op z}(in) =
Another example: constant prop
Set D = 2^{ x → N | x ∈ Vars ∧ N ∈ Z }
F_{x := N}(in) = in − { x → * } ∪ { x → N }
F_{x := y op z}(in) = in − { x → * } ∪ { x → N | (y → N1) ∈ in ∧ (z → N2) ∈ in ∧ N = N1 op N2 }
Another example: constant prop
F_{*x := y}(in) =
F_{x := *y}(in) =
Another example: constant prop
F_{*x := y}(in) = in − { z → * | z ∈ may-point-to(x) }
                   ∪ { z → N | z ∈ must-point-to(x) ∧ (y → N) ∈ in }
                   ∪ { z → N | (y → N) ∈ in ∧ (z → N) ∈ in }
F_{x := *y}(in) = in − { x → * }
                   ∪ { x → N | ∀ z ∈ may-point-to(y). (z → N) ∈ in }
Another example: constant prop
F_{x := f(...)}(in) =
F_{*x := *y + *z}(in) =
Another example: constant prop
F_{x := f(...)}(in) = ∅
F_{*x := *y + *z}(in) = F_{a := *y; b := *z; c := a + b; *x := c}(in)
Another example: constant prop
s: if (...): one incoming edge in, two outgoing edges out[0] and out[1]
merge: two incoming edges in[0] and in[1], one outgoing edge out
Lattice
(D, ⊑, ⊥, ⊤, ⊔, ⊓) =
Lattice
(D, ⊑, ⊥, ⊤, ⊔, ⊓) = (2^{ x → N | x ∈ Vars ∧ N ∈ Z }, ⊇, { x → N | x ∈ Vars ∧ N ∈ Z }, ∅, ∩, ∪)
Example
x := 5
v := 2
x := x + 1
w := v + 1
w := 3
y := x * 2
z := y + 5
w := w * v
Better lattice
Suppose we only had one variable
Better lattice
Suppose we only had one variable
D = { ⊥, ⊤ } ∪ Z
∀ i ∈ Z. ⊥ ⊑ i ∧ i ⊑ ⊤
height = 3
For all variables
Two possibilities
Option 1: Tuple of lattices
Given lattices (D_1, ⊑_1, ⊥_1, ⊤_1, ⊔_1, ⊓_1) ... (D_n, ⊑_n, ⊥_n, ⊤_n, ⊔_n, ⊓_n), create:
tuple lattice D^n =
For all variables
Two possibilities
Option 1: Tuple of lattices
Given lattices (D_1, ⊑_1, ⊥_1, ⊤_1, ⊔_1, ⊓_1) ... (D_n, ⊑_n, ⊥_n, ⊤_n, ⊔_n, ⊓_n), create:
tuple lattice D^n = ((D_1 × ... × D_n), ⊑, ⊥, ⊤, ⊔, ⊓) where
⊥ = (⊥_1, ..., ⊥_n)
⊤ = (⊤_1, ..., ⊤_n)
(a_1, ..., a_n) ⊔ (b_1, ..., b_n) = (a_1 ⊔_1 b_1, ..., a_n ⊔_n b_n)
(a_1, ..., a_n) ⊓ (b_1, ..., b_n) = (a_1 ⊓_1 b_1, ..., a_n ⊓_n b_n)
height = height(D_1) + ... + height(D_n)
For all variables
Option 2: Map from variables to single lattice
Given a lattice (D, ⊑, ⊥, ⊤, ⊔, ⊓) and a set V, create:
map lattice V → D = (V → D, ⊑, ⊥, ⊤, ⊔, ⊓)
Back to example
F_{x := y op z}(in) =
Back to example
F_{x := y op z}(in) = in[ x → in(y) op in(z) ]
where a op b =
General approach to domain design
Simple lattices:
– boolean logic lattice
– powerset lattice
– incomparable set: set of incomparable values, plus top and bottom (e.g. const prop lattice)
– two point lattice: just top and bottom
Use combinators to create more complicated lattices:
– tuple lattice constructor
– map lattice constructor
May vs Must
Has to do with the definition of the computed info
Set of x → y must-point-to pairs
– if we compute x → y, then during program execution, x must point to y
Set of x → y may-point-to pairs
– if during program execution it is possible for x to point to y, then we must compute x → y
May vs must
                              May           Must
most conservative (bottom)
most optimistic (top)
safe
merge
May vs must
                              May           Must
most conservative (bottom)    empty set     full set
most optimistic (top)         full set      empty set
safe                          overly big    overly small
merge                         ∪             ∩
Direction of analysis
Although constraints are not directional, flow functions are
All flow functions we have seen so far are in the forward direction
In some cases, the constraints are of the form in = F(out)
These are called backward problems
Example: live variables
– compute the set of variables that may be live
Example: live variables
Set D =
Lattice: (D, ⊑, ⊥, ⊤, ⊔, ⊓) =
Example: live variables
Set D = 2^Vars
Lattice: (D, ⊑, ⊥, ⊤, ⊔, ⊓) = (2^Vars, ⊆, ∅, Vars, ∪, ∩)
F_{x := y op z}(out) =
Example: live variables
Set D = 2^Vars
Lattice: (D, ⊑, ⊥, ⊤, ⊔, ⊓) = (2^Vars, ⊆, ∅, Vars, ∪, ∩)
F_{x := y op z}(out) = out − { x } ∪ { y, z }
Example: live variables
x := 5
y := x + 2
x := x + 1
y := x + 10
... y ...
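Here is a hedged sketch (my own, not the slides') of the live-variables flow function applied backward over the example, treating the statements as straight-line code; statements are encoded as (defined variable, list of used variables), an encoding assumed for illustration.

```python
# Hedged sketch: live variables computed backward over straight-line code.
# F_{x := y op z}(out) = (out - {x}) ∪ {y, z}.
def flow(out_live, defined, used):
    return (out_live - {defined}) | set(used)

statements = [
    ('x', []),         # x := 5
    ('y', ['x']),      # y := x + 2
    ('x', ['x']),      # x := x + 1
    ('y', ['x']),      # y := x + 10
]

live = {'y'}           # y is used after the last statement ("... y ...")
for defined, used in reversed(statements):
    live = flow(live, defined, used)
    print(defined, used, live)
```

Walking backward, only x is live between the assignments to y, and nothing is live before the first statement.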
Theory of backward analyses
Can formalize backward analyses in two ways
Option 1: reverse the flow graph, and then run the forward problem
Option 2: re-develop the theory, but in the backward direction
Precision
Going back to constant prop, in what cases would we lose precision?
Precision
Going back to constant prop, in what cases would we lose precision?

if (p) { x := 5; } else { x := 4; }
...
if (p) { y := x + 1 } else { y := x + 2 }
... y ...

if (...) { x := -1; } else { x := 1; }
y := x * x;
... y ...

x := 5
if (<some condition>) { x := 6 }
... x ...
where the condition is equivalent to false
Precision
The first problem: unreachable code
– solution: run unreachable code removal beforehand
– the unreachable code removal analysis will do its best, but may not remove all unreachable code
The other two problems are path-sensitivity issues
– Branch correlations: some paths are infeasible
– Path merging: can lead to loss of precision
MOP: meet over all paths
Information computed at a given point is the meet of the information computed along each path leading to that program point

if (...) { x := -1; } else { x := 1; }
y := x * x;
... y ...
MOP
For a path p, which is a sequence of statements [s_1, ..., s_n], define:
F_p(in) = F_{s_n}(... F_{s_1}(in) ...)
In other words: F_p = F_{s_n} ∘ ... ∘ F_{s_1}
Given an edge e, let paths-to(e) be the (possibly infinite) set of paths that lead to e
Given an edge e, MOP(e) is the join (⊔) of F_p applied to the information at program entry, over all p ∈ paths-to(e)
For us, it should be called JOP...
MOP vs. dataflow
As we saw in our example, in general, MOP ⊑ dataflow
In what cases is MOP the same as dataflow?
– along one path: x := -1; y := x * x; ... y ...
– along the other path: x := 1; y := x * x; ... y ...
MOP vs. dataflow
As we saw in our example, in general, MOP ⊑ dataflow
In what cases is MOP the same as dataflow?
Distributive problems. A problem is distributive if:
∀ a, b. F(a ⊔ b) = F(a) ⊔ F(b)
Summary of precision
Dataflow is the basic algorithm
To basic dataflow, we can add path-separation
– Get MOP, which is the same as dataflow for distributive problems
– Variety of research efforts to get closer to MOP for non-distributive problems
To basic dataflow, we can add path-pruning
– Get branch correlation
– Will see an example of this later in the course
To basic dataflow, can add both:
– meet over all feasible paths