
1
**Under-approximating to Over-approximate: Invisible Invariants and Abstract Interpretation**

Lenore Zuck, University of Illinois at Chicago
Ken McMillan, Microsoft Research

2
**Overview**

- For some abstract domains, computation of the best abstract transformer can be very costly, e.g., (indexed) predicate abstraction and canonical shape graphs.
- In these cases we may under-approximate the best transformer using finite-state methods, by restricting to a representative finite subset of the state space.
- In practice, this can be a close under-approximation, or even yield the exact abstract least fixed point.
- In some cases, finite-state under-approximations can yield orders-of-magnitude run-time reductions by reducing evaluation of the true abstract transformer.
- In this talk, we consider some generic strategies of this type, suggested by Pnueli and Zuck's Invisible Invariants method (viewed as abstract interpretation).

3
**Parameterized Systems**

Suppose we have a parallel composition of N (finite-state) processes P1 ‖ P2 ‖ … ‖ PN, where N is unknown.

Proofs require auxiliary constructs, parameterized on N:
- For safety, an inductive invariant
- For liveness, say, a ranking

Pnueli et al., 2001: derive these constructs for general N by abstracting from the mechanical proof of a particular N.

Surprising practical result: under-approximations can yield over-approximations at the fixed point.

4
**Recipe for an invariant**

1. Compute the reachable states R_N for a fixed N (say, N = 5).
2. Project onto a small subset of the processes (say 2):

   𝜙 = {(s₁,s₂) | ∃(s₁,s₂,…) ∈ R_N}

5
**Recipe for an invariant**

2. Project onto a small subset of the processes (say 2):

   𝜙 = {(s₁,s₂) | ∃(s₁,s₂,…) ∈ R_N}

3. Generalize from 2 to N, to get G_N:

   G_N = ⋀_{i ≠ j ∈ [1..N]} 𝜙(s_i, s_j)

4. Test whether G_N is an inductive invariant for all N:

   ∀N. G_N ⇒ 𝖷 G_N
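The three steps above can be sketched on a toy system. The lock-based protocol below is an illustrative assumption of this sketch, not the paper's example: compute R_N for a small N by explicit search, project onto pairs, and use the projected pair set as the generalized invariant.

```python
# Toy parameterized system (illustration only): N processes with
# pc in {"I","T","C"} (idle/trying/critical) plus a global lock;
# a process enters "C" only by taking the free lock.
def successors(state):
    pcs, lock = state
    for i, pc in enumerate(pcs):
        if pc == "I":
            yield (pcs[:i] + ("T",) + pcs[i+1:], lock)
        elif pc == "T" and not lock:
            yield (pcs[:i] + ("C",) + pcs[i+1:], True)
        elif pc == "C":
            yield (pcs[:i] + ("I",) + pcs[i+1:], False)

# Step 1: reachable states R_N for a fixed N, by explicit search.
def reachable(n):
    init = (("I",) * n, False)
    seen, frontier = {init}, [init]
    while frontier:
        for t in successors(frontier.pop()):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

# Step 2: project onto pairs of distinct processes.
def project(states):
    return {(pcs[i], pcs[j])
            for pcs, _lock in states
            for i in range(len(pcs))
            for j in range(len(pcs)) if i != j}

# Step 3: generalize -- G_N holds of a state iff every pair of
# distinct processes lies in the projected set phi.
def G(phi, state):
    return project({state}) <= phi

phi = project(reachable(3))   # phi computed from the N = 3 instance
print(("C", "C") in phi)      # False: the projection rules out two criticals
```

The invariant guess costs only a finite-state search on the N = 3 instance, yet `G(phi, ·)` is a candidate invariant for every N.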

6
**Checking inductiveness**

Inductiveness is equivalent to validity of this formula:

   G_N ∧ T ⇒ G′_N

where T is the transition relation and G′_N is G_N over the primed (next-state) variables.

Small model theorem: if there is a countermodel with N > M, there is a countermodel with N = M. Hence it suffices to check inductiveness for N ≤ M.

In this case, both the invariant generation and the invariant checking amount to finite-state model checking. If no small model result is available, however, we can rely on a theorem prover to check inductiveness.
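For a finite cutoff, the check G_N ∧ T ⇒ G′_N amounts to enumerating all states (not just reachable ones) of each instance N ≤ M. A minimal sketch on the same toy lock protocol as above (restated here so the snippet runs standalone; the protocol and cutoff are illustrative assumptions):

```python
from itertools import product

# Toy lock protocol: pcs in {"I","T","C"}, global lock;
# "C" is entered only via the free lock.
def successors(state):
    pcs, lock = state
    for i, pc in enumerate(pcs):
        if pc == "I":
            yield (pcs[:i] + ("T",) + pcs[i+1:], lock)
        elif pc == "T" and not lock:
            yield (pcs[:i] + ("C",) + pcs[i+1:], True)
        elif pc == "C":
            yield (pcs[:i] + ("I",) + pcs[i+1:], False)

# Candidate invariant G: at most one critical process, and a
# critical process implies the lock is held.
def G(state):
    pcs, lock = state
    crit = sum(pc == "C" for pc in pcs)
    return crit <= 1 and (crit == 0 or lock)

# Check G /\ T => G' for every instance up to the cutoff m, by brute
# force over ALL states -- this is what inductiveness demands.
def inductive_up_to(m, inv):
    for n in range(1, m + 1):
        for pcs in product("ITC", repeat=n):
            for lock in (False, True):
                s = (pcs, lock)
                if inv(s) and any(not inv(t) for t in successors(s)):
                    return False
    return True

print(inductive_up_to(4, G))  # True: G is inductive for all checked N
weak = lambda s: sum(pc == "C" for pc in s[0]) <= 1
print(inductive_up_to(4, weak))  # False: mutual exclusion alone is not inductive
```

The second check illustrates why invariant strengthening matters: the bare safety property fails inductiveness from the unreachable state where one process is critical but the lock is free.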

7
**Abstraction setting**

- Concrete state space 𝑆, with concrete transformer 𝜏
- Abstract language 𝐿
- Concretization 𝛾 : 𝐿 → 2^𝑆, which preserves conjunctions
- Abstraction 𝛼 : 2^𝑆 → 𝐿, defined by 𝛼(𝑠) = ⋀ {𝜙 ∈ 𝐿 | 𝑠 ⊆ 𝛾(𝜙)}
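These definitions can be made executable on a tiny conjunctive predicate domain over integers (the predicates, names, and finite universe here are all illustrative assumptions): an abstract element is a set of predicates read as their conjunction, 𝛾 returns the satisfying states, and 𝛼(s) collects every predicate true on all of s.

```python
# Tiny abstract domain (illustrative): L = conjunctions over these predicates.
PREDS = {
    "x_even":  lambda x: x % 2 == 0,
    "x_pos":   lambda x: x > 0,
    "x_lt_10": lambda x: x < 10,
}
UNIVERSE = set(range(-20, 20))  # finite stand-in for the concrete space S

def gamma(phi):
    # Concretization: states satisfying every predicate in phi.
    return {x for x in UNIVERSE if all(PREDS[p](x) for p in phi)}

def alpha(s):
    # alpha(s) = conjunction of all phi in L with s included in gamma(phi);
    # for a conjunctive domain, that is every predicate true on all of s.
    return {p for p, f in PREDS.items() if all(f(x) for x in s)}

s = {2, 4, 8}
print(sorted(alpha(s)))      # ['x_even', 'x_lt_10', 'x_pos']
print(s <= gamma(alpha(s)))  # True: gamma(alpha(s)) over-approximates s
```

Note the Galois-connection property visible here: 𝛾(𝛼(s)) ⊇ s, and applying 𝛼 again loses nothing.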

8
**Parameterized systems**

Concrete state space 𝑆 is the set of valuations of the variables:
- A special variable 𝑵 : Nat represents the system parameter.
- Ranges of the other variables depend on 𝑵. Example: 𝑎 : 1…𝑵 → 1…𝑵
- For fixed N, all ranges are finite: 𝑆_N = {𝑠 ∈ 𝑆 | 𝑠(𝑵) = N}
- The concrete transition system is defined in FOL: 𝜏(𝑠) = 𝐼 ∪ 𝑇(𝑠)

The abstract domain is indexed predicate abstraction:
- A fixed set 𝐽 of index variables (say {𝑖,𝑗}) and a fixed set 𝑃 of atomic predicates.
- A matrix is a Boolean combination over 𝑃 (in some normal form).
- 𝐿 is the set of formulas ∀𝐽. 𝑀, where 𝑀 is a matrix. Example: ∀𝑖,𝑗. 𝑖 ≠ 𝑗 ⇒ ¬(𝑞[𝑖] ∧ 𝑞[𝑗])

Small model results: M depends mainly on the quantifier structure of G_N and T. Example: if T has one universal and G_N has two, then M = 2b + 3.

9
**Invariant by abstract interpretation**

The best abstract transformer is 𝜏# = 𝛼 ∘ 𝜏 ∘ 𝛾. Unfortunately, 𝜏# is difficult to compute (exponentially many theorem prover calls).

We compute the strongest inductive invariant in 𝐿 by iterating 𝜏# from ⊥:

   𝜓 = lfp 𝜏# = 𝜏#(𝜏#(⋯ 𝜏#(⊥) ⋯))

For our abstraction, this computation can be quite expensive!
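The fixed-point iteration can be sketched over a toy conjunctive predicate domain and a toy concrete transformer (both assumptions of this sketch, restated so it runs standalone): each step is one application of 𝛼 ∘ 𝜏 ∘ 𝛾.

```python
# Toy conjunctive predicate domain (illustrative).
PREDS = {
    "x_even":  lambda x: x % 2 == 0,
    "x_pos":   lambda x: x > 0,
    "x_lt_10": lambda x: x < 10,
}
UNIVERSE = set(range(-20, 20))

def gamma(phi):
    return {x for x in UNIVERSE if all(PREDS[p](x) for p in phi)}

def alpha(s):
    return {p for p, f in PREDS.items() if all(f(x) for x in s)}

def tau(s):
    # Toy concrete transformer: initial state 0, one step adds 2.
    return {0} | {x + 2 for x in s if x + 2 in UNIVERSE}

def abstract_lfp():
    # Kleene iteration of the best transformer tau# = alpha . tau . gamma,
    # starting from the domain's least element (alpha of the empty set).
    phi = alpha(set())
    while True:
        nxt = alpha(tau(gamma(phi)))
        if nxt == phi:
            return phi
        phi = nxt

print(sorted(abstract_lfp()))  # ['x_even']: the invariant "x is even"
```

In the toy domain each 𝜏# step is cheap; in the indexed-predicate domain each step hides many theorem prover calls, which is exactly the cost the talk aims to avoid.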

10
**Restricted abstraction**

The concrete state space 𝑆 is a union of finite spaces 𝑆₁, 𝑆₂, 𝑆₃, …, one for each value of 𝑁.

Restricted concretization ("project") and abstraction ("generalize"):
- 𝛾_N : 𝐿 → 2^{𝑆_N}, where 𝛾_N(𝜙) = 𝛾(𝜙) ∩ 𝑆_N
- 𝛼_N : 2^{𝑆_N} → 𝐿, where 𝛼_N(𝑠) = 𝛼(𝑠)
- 𝜏_N : 2^{𝑆_N} → 2^{𝑆_N}, the concrete transformer restricted to 𝑆_N

This gives a Galois connection 2^{𝑆_N} ⇄ 𝐿 via (𝛼_N, 𝛾_N), and 𝛼_N is computable!

11
**Invisible invariant construction**

We construct the invariant guess by reachability and abstraction:

- Compute R_N = lfp 𝜏_N by finite-state reachability.
- Guess the invariant G_N = 𝛼_N(R_N), generalized to all N.

Testing the invariant guess: check that G_N is inductive with an SMT solver; by the small model theorem, checking an instance with N ≥ M suffices.

12
**Under-approximation**

The idea of generalizing from finite instances suggests that we can under-approximate the best abstract transformer:

   𝜏_N# = 𝛼_N ∘ 𝜏_N ∘ 𝛾_N

𝜏_N# is an under-approximation of 𝜏# that we can compute with finite-state methods.
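To see how 𝜏_N# behaves, here is a sketch on a toy integer predicate domain (the domain, transformer, and sub-spaces 𝑆_N = {−N, …, N−1} are all illustrative assumptions, restated so the snippet runs standalone). With a too-small N the under-approximate fixed point is strictly stronger than the exact abstract fixed point; with a representative N the two coincide.

```python
# Toy conjunctive predicate domain and transformer (illustrative).
PREDS = {
    "x_even":  lambda x: x % 2 == 0,
    "x_pos":   lambda x: x > 0,
    "x_lt_10": lambda x: x < 10,
}
UNIVERSE = set(range(-20, 20))

def gamma(phi):
    return {x for x in UNIVERSE if all(PREDS[p](x) for p in phi)}

def alpha(s):
    return {p for p, f in PREDS.items() if all(f(x) for x in s)}

# Restriction to the finite sub-space S_N = {-n, ..., n-1}.
def gamma_N(phi, n):
    return {x for x in gamma(phi) if -n <= x < n}

def tau_N(s, n):
    # Concrete transformer (init 0, step +2), restricted to S_N.
    return {0} | {x + 2 for x in s if -n <= x + 2 < n}

def lfp_tau_N_sharp(n):
    # Iterate the under-approximate transformer tau_N# = alpha . tau_N . gamma_N.
    phi = alpha(set())
    while True:
        nxt = alpha(tau_N(gamma_N(phi, n), n))
        if nxt == phi:
            return phi
        phi = nxt

print(sorted(lfp_tau_N_sharp(5)))   # ['x_even', 'x_lt_10']: N too small, guess too strong
print(sorted(lfp_tau_N_sharp(12)))  # ['x_even']: matches the exact abstract lfp
```

This is the pattern the talk exploits: the N = 5 guess would fail the one final 𝜏# inductiveness test, while the N = 12 guess passes it, so only a single expensive check is ever needed.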

13
**Three methods**

- Method A: compute lfp(𝜏#) directly, applying the best transformer 𝜏# at every iteration.
- Method B: compute lfp(𝜏_N#) by iterating the under-approximate transformer 𝜏_N# = 𝛼_N ∘ 𝜏_N ∘ 𝛾_N, then test whether the result is also the fixed point of 𝜏#.
- Method C: compute the concrete fixed point lfp(𝜏_N) by finite-state reachability, generalize it with 𝛼_N, and test the result against 𝜏#.

14
**Experiments**

Evaluation of the three approaches:

- Strategy A: compute 𝜏# using UCLID predicate abstraction (eager reduction to ALL-SAT).
- Strategy B: compute 𝜏_N# by TLV with BDDs, and 𝜏# using UCLID.
- Strategy C: compute 𝜏_N and 𝜏_N# by TLV, and 𝜏# using UCLID.

Strategy C wins in two cases: fewer computations of 𝛼_N and 𝛾_N. Strategy B wins in one case: more abstraction reduces BDD size and iterations. In all cases, only one theorem prover call is needed.

15
**Related work**

Yorsh, Ball, Sagiv 2006:
- Combines testing and abstract interpretation.
- Does not compute abstract fixed points for finite sub-spaces as here; here we apply model checking aggressively to reduce the 𝜏# computation.

Bingham 2008:
- Essentially applies Strategy B with a small model theorem to verify the FLASH cache coherence protocol.
- Compare to an interactive proof with PVS with 111 lemmas (776 lines) and an extensive proof script!
- Static analysis with finite domains can replace very substantial hand-proof efforts.

16
**Conclusion**

Invisible invariants suggest a general approach to minimizing computation of the best transformer, based on two ideas:
- Under-approximations can yield over-approximations at the fixed point. This is a bit mysterious, but observationally true.
- Computing the fixed point with under-approximations can use more lightweight methods, for example BDD-based model checking instead of a theorem prover.

Using under-approximations can reduce the number of theorem prover calls to just one in the best case.

We can apply this idea whenever we can define finite sub-spaces that are representative of the whole space. Parametricity and symmetry are not required; for example, the idea could be applied to heap-manipulating programs by bounding the heap size.

17
**Example: Peterson mutual exclusion**

N-process version from Pnueli et al. 2001. The program for process i, composed in parallel for i = 1…N:

    𝑵 : Nat
    pc : 1…𝑵 → {l0, …, l6}
    in : 1…𝑵 → 0…𝑵
    last : 1…𝑵 → 0…𝑵

    Initially ∀i ∈ 1…N. in[i] = 0 ∧ last[i] = 0 ∧ pc[i] = l0

    l0 : <non-critical>; goto l1
    l1 : (in[i], last[1]) := (1, i); goto l2
    l2 : if in[i] < N goto l3 else goto l5
    l3 : if (∀j ≠ i. in[j] < in[i]) ∨ last[in[i]] ≠ i then goto l4 else goto l3
    l4 : in[i] := in[i] + 1; last[in[i]] := i; goto l2
    l5 : <critical>; goto l6
    l6 : in[i] := 0; goto l0
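The finite-state step of the method, reachability for a fixed small instance, can be sketched directly: below is my transcription of the program into Python (an illustrative assumption, treating each labeled statement as one atomic step), which enumerates the reachable states for N = 3 and checks mutual exclusion.

```python
N = 3
L0, L1, L2, L3, L4, L5, L6 = range(7)

def successors(state):
    # One atomic step of some process i; arrays are 1-indexed, slot 0 unused.
    pcs, ins, last = state
    out = []
    for i in range(1, N + 1):
        p, n, l = list(pcs), list(ins), list(last)
        pc = pcs[i]
        if pc == L0:                      # <non-critical>
            p[i] = L1
        elif pc == L1:                    # (in[i], last[1]) := (1, i)
            n[i], l[1], p[i] = 1, i, L2
        elif pc == L2:
            p[i] = L3 if ins[i] < N else L5
        elif pc == L3:                    # wait until ahead, or not last at level
            if all(ins[j] < ins[i] for j in range(1, N + 1) if j != i) \
               or last[ins[i]] != i:
                p[i] = L4
            else:
                continue                  # busy-wait: no new state
        elif pc == L4:                    # advance one level
            n[i] = ins[i] + 1
            l[n[i]], p[i] = i, L2
        elif pc == L5:                    # <critical>
            p[i] = L6
        else:                             # L6: exit
            n[i], p[i] = 0, L0
        out.append((tuple(p), tuple(n), tuple(l)))
    return out

def reachable():
    # All variables start at 0; processes 1..N start at l0 (= 0).
    init = ((0,) * (N + 1), (0,) * (N + 1), (0,) * (N + 1))
    seen, frontier = {init}, [init]
    while frontier:
        for t in successors(frontier.pop()):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

R = reachable()
mutex = all(sum(pc in (L5, L6) for pc in pcs[1:]) <= 1 for pcs, _, _ in R)
print(mutex)  # True: no reachable state has two processes past l4
```

This is exactly the R_N from which the invisible-invariants recipe would project and generalize; the mutual exclusion check mirrors the slide's invariant conjunct ¬(in(i) = N ∧ in(j) = N).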

18
**Peterson invariant**

Hand-made invariant for N-process Peterson (required a few hours of trial and error with a theorem prover):

    (m.ZERO < m.in(i) & m.in(i) < m.N => m.in(m.last(m.in(i))) = m.in(i)) &
    (m.in(i) = m.in(j) & m.ZERO < l & l < m.in(i) => m.in(m.last(l)) = l) &
    (m.pc(i) = L4 => (m.last(m.in(i)) != i | m.in(j) < m.in(i))) &
    ((m.pc(i) = L5 | m.pc(i) = L6) => m.in(i) = m.N) &
    ((m.pc(i) = L0 | m.pc(i) = L1) => m.in(i) = m.ZERO) &
    (m.pc(i) = L2 => m.in(i) > m.ZERO) &
    ((m.pc(i) = L3 | m.pc(i) = L4) => m.in(i) < m.N & m.in(i) > m.ZERO) &
    (~(m.in(i) = m.N & m.in(j) = m.N))

19
**Peterson Invariant (cont.)**

Machine-generated by TLV in 6.8 seconds:

    X18 := ~levlty1 & y1ltN & ~y1eqN & ~y2eqN & ~y1gtz & y1eqz & (~ysy1eqy1 => ~sy1eq1);
    X15 := ~y1eqN & ~y2eqN & y1gtz & ~y1eqz & ysy1eqy1;
    X5 := (~levlty1 => y1ltN & X15);
    X1 := ysy1eqy1 & ~sy1eq1;
    X0 := ysy1eqy1 & sy1eq1;
    X16 := y1eqN & y2eqN & y1gtz & ~y1eqz & ysleveqlev & X0;
    X14 := y1eqN & y2eqN & y1gtz & ~y1eqz & X0;
    X13 := ~y1eqN & ~y2eqN & y1gtz & ~y1eqz & (ysleveqlev => ysy1eqy1) & (~ysleveqlev => X0);
    X7 := (levlty1 => y1ltN & ~y1eqN & ~y2eqN & y1gtz & ~y1eqz & ysleveqlev & ysy1eqy1) & X5;
    X6 := ~y1eqy2 & X7;
    X4 := (levlty1 => y1ltN & X13) & X5;
    X3 := (levlty1 => y1ltN & ~y1eqN & ~y2eqN & y1gtz & ~y1eqz & ysleveqlev & X1) & (~levlty1 => y1ltN & ~y1eqN & ~y2eqN & y1gtz & ~y1eqz & X1);
    X2 := ~y1eqy2 & X3;
    X17 := (levlty1 => (y1ltN => X13) & (~y1ltN => X14)) & (~levlty1 => (y1ltN => X15) & (~y1ltN => X16));
    X12 := (y1eqy2 => X7);
    X11 := (y1lty2 => X6);
    X10 := y1lty2 & X6;
    X9 := ~y1lty2 & ~y1eqy2 & X4;
    X8 := (~y1eqy2 => X4);
    matrix :=
      ((loc1 = L5 | loc1 = L6) => (loc2 = L0 | loc2 = L1 | loc2 = L2 | loc2 = L3 | loc2 = L4) & ~y1lty2 & ~y1eqy2 & (levlty1 => ~y1ltN & X14) & (~levlty1 => ~y1ltN & X16)) &
      (loc1 = L4 => ((loc2 = L5 | loc2 = L6) => y1lty2 & X2) & ((loc2 = L2 | loc2 = L3 | loc2 = L4) => (y1lty2 => X2) & (~y1lty2 => (y1eqy2 => X3) & X8)) & ((loc2 = L0 | loc2 = L1) => X9)) &
      (loc1 = L3 => ((loc2 = L5 | loc2 = L6) => X10) & ((loc2 = L2 | loc2 = L3 | loc2 = L4) => X11 & (~y1lty2 => X12 & X8)) & ((loc2 = L0 | loc2 = L1) => X9)) &
      (loc1 = L2 => ((loc2 = L5 | loc2 = L6) => X10) & ((loc2 = L2 | loc2 = L3 | loc2 = L4) => X11 & (~y1lty2 => X12 & (~y1eqy2 => X17))) & ((loc2 = L0 | loc2 = L1) => ~y1lty2 & ~y1eqy2 & X17)) &
      ((loc1 = L0 | loc1 = L1) => (~(loc2 = L1 | loc2 = L0) => y1lty2 & ~y1eqy2 & X18) & ((loc2 = L0 | loc2 = L1) => ~y1lty2 & y1eqy2 & X18));
