
1
Under-approximating to Over-approximate: Invisible Invariants and Abstract Interpretation
Ken McMillan, Microsoft Research
Lenore Zuck, University of Illinois at Chicago

2
Overview
For some abstract domains, computing the best abstract transformer can be very costly:
– (Indexed) predicate abstraction
– Canonical shape graphs
In these cases we may under-approximate the best transformer using finite-state methods, by restricting to a representative finite subset of the state space.
– In practice, this can be a close under-approximation, or even yield the exact abstract least fixed point.
– In some cases, finite-state under-approximations can yield orders-of-magnitude run-time reductions by reducing the number of evaluations of the true abstract transformer.
In this talk, we'll consider some generic strategies of this type, suggested by Pnueli and Zuck's Invisible Invariants method (viewed as abstract interpretation).

3
Parameterized Systems
Suppose we have a parallel composition of N (finite-state) processes, where N is unknown:
P1 ∥ P2 ∥ P3 ∥ ... ∥ PN
Proofs require auxiliary constructs, parameterized on N:
– For safety, an inductive invariant
– For liveness, say, a ranking
Pnueli et al., 2001: derive these constructs for general N by abstracting from the mechanical proof of a particular N.
– Surprising practical result: under-approximations can yield over-approximations at the fixed point.

4
Recipe for an invariant
1. Compute the reachable states R_N for fixed N (say, N = 5).
2. Project onto a small subset of processes (say 2):
ψ = { (s_1, s_2) | ∃ (s_1, s_2, ...) ∈ R_N }

5
Recipe for an invariant (cont.)
2. Project onto a small subset of processes (say 2):
ψ = { (s_1, s_2) | ∃ (s_1, s_2, ...) ∈ R_N }
3. Generalize from 2 to N, to get G_N:
G_N = ∧_{i ≠ j ∈ [1..N]} ψ(s_i, s_j)
4. Test whether G_N is an inductive invariant for all N:
∀ N. G_N ⇒ X G_N
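The three steps above can be sketched concretely. The sketch below assumes a hypothetical toy protocol (an illustrative stand-in, not the protocol from the talk): each process is in state I (idle), T (trying), or C (critical), and a process may enter C only when no other process is in C.

```python
# Toy parameterized protocol (hypothetical, for illustration):
# each process is in one of {"I","T","C"}; I->T freely,
# T->C only if no other process is in "C", C->I freely.
def successors(state):
    out = set()
    for i, s in enumerate(state):
        if s == "I":
            out.add(state[:i] + ("T",) + state[i+1:])
        elif s == "T" and "C" not in state:
            out.add(state[:i] + ("C",) + state[i+1:])
        elif s == "C":
            out.add(state[:i] + ("I",) + state[i+1:])
    return out

# Step 1: reachable states R_N of a fixed finite instance, by BFS
def reachable(n):
    init = ("I",) * n
    seen, frontier = {init}, [init]
    while frontier:
        nxt = [t for s in frontier for t in successors(s) if t not in seen]
        seen.update(nxt)
        frontier = nxt
    return seen

# Step 2: project onto ordered pairs of distinct processes
def project_pairs(states, n):
    return {(s[i], s[j]) for s in states
            for i in range(n) for j in range(n) if i != j}

# Step 3: generalize -- G_N holds of a state iff every ordered
# pair of distinct process states appears in the projection psi
def in_G(state, psi):
    n = len(state)
    return all((state[i], state[j]) in psi
               for i in range(n) for j in range(n) if i != j)

psi = project_pairs(reachable(4), 4)
# mutual exclusion follows if ("C","C") never appears in the projection
print(("C", "C") in psi)  # → False
```

Here the projected pairs ψ play the role of the abstract invariant guess; G_N is its pairwise generalization to an arbitrary instance size.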

6
Checking inductiveness
Inductiveness is equivalent to validity of this formula:
G_N ∧ T ⇒ G'_N     (T is the transition relation)
Small model theorem:
– If there is a countermodel with N > M, there is a countermodel with N = M.
– Hence it suffices to check inductiveness for N ≤ M.
In this case both the invariant generation and the invariant checking amount to finite-state model checking. If no small model result is available, however, we can rely on a theorem prover to check inductiveness.
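When the instances are finite, the check G_N ∧ T ⇒ G'_N for each N ≤ M can be done by explicit enumeration: every transition out of a G-state must land in a G-state. A minimal sketch, using a hypothetical toy protocol over process states I/T/C in place of a real transition relation:

```python
from itertools import product

# Toy protocol (hypothetical): I->T freely, T->C only if no other
# process is in "C", C->I freely.
def successors(state):
    out = set()
    for i, s in enumerate(state):
        if s == "I":
            out.add(state[:i] + ("T",) + state[i+1:])
        elif s == "T" and "C" not in state:
            out.add(state[:i] + ("C",) + state[i+1:])
        elif s == "C":
            out.add(state[:i] + ("I",) + state[i+1:])
    return out

def inductive(n, G):
    # G /\ T => G': every transition from a G-state stays in G
    for state in product("ITC", repeat=n):
        if not G(state):
            continue
        for t in successors(state):
            if not G(t):
                return False
    return True

# candidate invariant: at most one process in the critical section
G = lambda s: s.count("C") <= 1
# by the small model theorem it suffices to check N <= M (say M = 4)
print(all(inductive(n, G) for n in range(1, 5)))  # → True
```

Note the quantification over all states of the instance, not just the reachable ones: inductiveness is a property of the transition relation, not of reachability.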

7
Abstraction setting
Concrete state space; abstract language L preserving conjunctions.
α(s) = ∧ { φ ∈ L | s ⊆ γ(φ) }
(τ is the concrete transformer)
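A minimal model of this abstraction, assuming a small hand-picked language L of predicates (the predicate names and state space are illustrative, not from the talk). The abstraction of a set of states is the conjunction of all predicates that over-approximate it:

```python
STATES = set(range(10))

# gamma of each atomic formula: the set of states it denotes
L = {
    "even":   {s for s in STATES if s % 2 == 0},
    "small":  {s for s in STATES if s < 5},
    "nonneg": STATES,
}

def alpha(S):
    """Best abstraction: all predicates phi with S subseteq gamma(phi)."""
    return {name for name, den in L.items() if S <= den}

def gamma(phis):
    """Concretization: intersection (conjunction) of the predicates."""
    out = set(STATES)
    for name in phis:
        out &= L[name]
    return out

print(sorted(alpha({0, 2, 4})))  # → ['even', 'nonneg', 'small']
# gamma . alpha over-approximates: S subseteq gamma(alpha(S))
print({0, 2, 4} <= gamma(alpha({0, 2, 4})))  # → True
```

Because L is closed under conjunction (modeled here as a set of predicates interpreted conjunctively), α and γ form a Galois connection.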

8
Parameterized systems
Example: ∀ i, j: i ≠ j ⇒ ¬(q[i] ∧ q[j])
Small model results:
– M depends mainly on the quantifier structure of G_N and T.
– Example: if T has one universal quantifier and G_N has two, then M = 2b + 3.

9
Invariant by AI
The strongest inductive invariant in L is the least fixed point of the abstract transformer:
μ = lfp τ#
τ# is difficult to compute (exponentially many theorem prover calls).
For our abstraction, this computation can be quite expensive!
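The least-fixed-point computation itself is just Kleene iteration. A generic sketch over finite sets of abstract facts (the transformer below is a hypothetical stand-in, not the talk's τ#):

```python
# Generic least fixed point of a monotone transformer over a finite
# domain: iterate from bottom until the value stops changing.
def lfp(transformer, bottom=frozenset()):
    x = bottom
    while True:
        nxt = transformer(x)
        if nxt == x:  # fixed point reached
            return x
        x = nxt

# Example: facts are integers; t#(X) = {0} union {n+1 | n in X, n < 3}
t = lambda X: frozenset({0} | {n + 1 for n in X if n < 3})
print(sorted(lfp(t)))  # → [0, 1, 2, 3]
```

The expense the slide refers to is not this loop but each call to the transformer: for predicate abstraction, one exact evaluation of τ# can require exponentially many theorem prover queries.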

10
Restricted abstraction
Abstract language restricted to a finite instance.
Galois connection: "project" (α) and "generalize" (γ) are computable!

11
Invisible invariant construction
We construct the invariant guess by reachability and abstraction:
R_N = lfp τ (the reachable states of instance N)
G_N = γ(α(R_N))
Testing the invariant guess: check that G_N contains its post-image (G_N ⊇ ...), using SMT if N ≥ M.

12
Under-approximation
The idea of generalizing from finite instances suggests we can under-approximate the best abstract transformer τ# by a finite-instance version τ#_N.
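One way to realize τ#_N is: concretize the abstract value into a single finite instance of size N, apply the concrete post there, and project back. A sketch, assuming a hypothetical toy protocol (process states I, T, C; a process enters C only when no other is in C) and a pairwise abstract domain:

```python
from itertools import product

# Toy protocol (hypothetical): I->T freely, T->C only if no other
# process is in "C", C->I freely.
def successors(state):
    out = set()
    for i, s in enumerate(state):
        if s == "I":
            out.add(state[:i] + ("T",) + state[i+1:])
        elif s == "T" and "C" not in state:
            out.add(state[:i] + ("C",) + state[i+1:])
        elif s == "C":
            out.add(state[:i] + ("I",) + state[i+1:])
    return out

def t_sharp_N(psi, n=3):
    # generalize (gamma_N): all n-states whose ordered pairs lie in psi
    gN = {s for s in product("ITC", repeat=n)
          if all((s[i], s[j]) in psi
                 for i in range(n) for j in range(n) if i != j)}
    # concrete post on the finite instance (kept cumulative)
    post = gN | {t for s in gN for t in successors(s)}
    # project (alpha): back onto ordered pairs of process states
    return {(s[i], s[j]) for s in post
            for i in range(n) for j in range(n) if i != j}

# iterate tau#_N to a fixed point from the initial state's pairs
psi = {("I", "I")}
while True:
    nxt = t_sharp_N(psi)
    if nxt == psi:
        break
    psi = nxt
print(("C", "C") in psi)  # → False
```

Each step here is a finite-state computation on one instance, avoiding theorem prover calls; the price is that the result only under-approximates the best transformer, which is exactly the trade-off the talk exploits.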

13
Three methods
[Diagram: three fixed-point computation schemes, A, B and C, combining the under-approximate transformer with the exact one; each result is compared (lfp =?) against the true abstract least fixed point.]

14
Experiments

15
Related Work
Static analysis with finite domains can replace very substantial hand proof efforts.

16
Conclusion
Invisible invariants suggest a general approach to minimizing computation of the best transformer, based on two ideas:
– Under-approximations can yield over-approximations at the fixed point. This is a bit mysterious, but observationally true.
– Computing the fixed point with under-approximations can use more lightweight methods, for example BDD-based model checking instead of a theorem prover.
Using under-approximations can reduce the number of theorem prover calls to just one in the best case.
We can apply this idea whenever we can define finite sub-spaces that are representative of the whole space.
– Parametricity and symmetry are not required.
– For example, it could be applied to heap-manipulating programs by bounding the heap size.

17
Example: Peterson mutual exclusion
N-process version from Pnueli et al.

18
Peterson invariant
Hand-made invariant for N-process Peterson:
(m.ZERO m.in(m.last(m.in(i))) = m.in(i)) &
(m.in(i) = m.in(j) & m.ZERO m.in(m.last(l)) = l) &
(m.pc(i) = L4 => (m.last(m.in(i)) != i | m.in(j) < m.in(i))) &
((m.pc(i) = L5 | m.pc(i) = L6) => m.in(i) = m.N) &
((m.pc(i) = L0 | m.pc(i) = L1) => m.in(i) = m.ZERO) &
(m.pc(i) = L2 => m.in(i) > m.ZERO) &
((m.pc(i) = L3 | m.pc(i) = L4) => m.in(i) m.ZERO) &
(~(m.in(i) = m.N & m.in(j) = m.N))
Required a few hours of trial and error with a theorem prover.

19
Peterson Invariant (cont.)
Machine-generated by TLV in 6.8 seconds:
X18 := ~levlty1 & y1ltN & ~y1eqN & ~y2eqN & ~y1gtz & y1eqz & (~ysy1eqy1 => ~sy1eq1);
X15 := ~y1eqN & ~y2eqN & y1gtz & ~y1eqz & ysy1eqy1;
X5 := (~levlty1 => y1ltN & X15);
X1 := ysy1eqy1 & ~sy1eq1;
X0 := ysy1eqy1 & sy1eq1;
X16 := y1eqN & y2eqN & y1gtz & ~y1eqz & ysleveqlev & X0;
X14 := y1eqN & y2eqN & y1gtz & ~y1eqz & X0;
X13 := ~y1eqN & ~y2eqN & y1gtz & ~y1eqz & (ysleveqlev => ysy1eqy1) & (~ysleveqlev => X0);
X7 := (levlty1 => y1ltN & ~y1eqN & ~y2eqN & y1gtz & ~y1eqz & ysleveqlev & ysy1eqy1) & X5;
X6 := ~y1eqy2 & X7;
X4 := (levlty1 => y1ltN & X13) & X5;
X3 := (levlty1 => y1ltN & ~y1eqN & ~y2eqN & y1gtz & ~y1eqz & ysleveqlev & X1) & (~levlty1 => y1ltN & ~y1eqN & ~y2eqN & y1gtz & ~y1eqz & X1);
X2 := ~y1eqy2 & X3;
X17 := (levlty1 => (y1ltN => X13) & (~y1ltN => X14)) & (~levlty1 => (y1ltN => X15) & (~y1ltN => X16));
X12 := (y1eqy2 => X7);
X11 := (y1lty2 => X6);
X10 := y1lty2 & X6;
X9 := ~y1lty2 & ~y1eqy2 & X4;
X8 := (~y1eqy2 => X4);
matrix :=
((loc1 = L5 | loc1 = L6) => (loc2 = L0 | loc2 = L1 | loc2 = L2 | loc2 = L3 | loc2 = L4) & ~y1lty2 & ~y1eqy2 & (levlty1 => ~y1ltN & X14) & (~levlty1 => ~y1ltN & X16)) &
(loc1 = L4 => ((loc2 = L5 | loc2 = L6) => y1lty2 & X2) & ((loc2 = L2 | loc2 = L3 | loc2 = L4) => (y1lty2 => X2) & (~y1lty2 => (y1eqy2 => X3) & X8)) & ((loc2 = L0 | loc2 = L1) => X9)) &
(loc1 = L3 => ((loc2 = L5 | loc2 = L6) => X10) & ((loc2 = L2 | loc2 = L3 | loc2 = L4) => X11 & (~y1lty2 => X12 & X8)) & ((loc2 = L0 | loc2 = L1) => X9)) &
(loc1 = L2 => ((loc2 = L5 | loc2 = L6) => X10) & ((loc2 = L2 | loc2 = L3 | loc2 = L4) => X11 & (~y1lty2 => X12 & (~y1eqy2 => X17))) & ((loc2 = L0 | loc2 = L1) => ~y1lty2 & ~y1eqy2 & X17)) &
((loc1 = L0 | loc1 = L1) => (~(loc2 = L1 | loc2 = L0) => y1lty2 & ~y1eqy2 & X18) & ((loc2 = L0 | loc2 = L1) => ~y1lty2 & y1eqy2 & X18));
