
1
Interpolation and Widening
Ken McMillan, Microsoft Research

2
Interpolation and Widening
Widening/narrowing and Craig interpolation are two approaches to computing inductive invariants of transition systems. Both are essentially methods of generalizing from proofs about bounded executions to proofs about unbounded executions. In this talk, we'll consider the relationship between these two approaches, from both theoretical and practical points of view. We consider only property-proving applications, since interpolation applies only when there is a property to prove.

3
Intuitive comparison
[diagram: two iteration sequences ordered from stronger to weaker, one for widening/narrowing and one for interpolation, each approaching the least fixed point (lfp) within the region of inductive invariants]

4
Abstractions as proof systems
We will view both widening/narrowing and interpolation as proof systems
– In particular, local proof systems
A proof system (or abstraction) consists of:
– A logical language L (abstract domain)
– A set of sound deduction rules
A choice of proof system constitutes a bias, or domain knowledge
– Rich proof system = weak bias
– Impoverished proof system = strong bias
By restricting the logical language and deduction rules, the analysis designer expresses a space of possible proofs in which the analysis tool should search.

5
Fundamental problems
Relevance
– We must avoid a combinatorial explosion of deductions
– Thus, deduction must be restricted to facts relevant to the property
Convergence
– Eventually the proofs for bounded executions must generalize to a proof about unbounded executions.

6
Different approaches
Widening/narrowing relies on a restricted proof system
– Relevance is enforced by strong bias
– Convergence is also enforced in this way, but proof of a property is not guaranteed
Interpolation uses a rich proof system
– Relevance is determined by Occam's razor: relevant deductions occur in simple property proofs
– Convergence is not guaranteed, but approached heuristically, again using Occam's razor
We will see that the two methods have many aspects in common, but take different approaches to these fundamental problems. In the interpolation approach, we rely on well-developed theorem-proving techniques to search large spaces for simple proofs.

7
Proofs
A proof is a series of deductions, from premises to conclusions. Each deduction is an instance of an inference rule. Usually, we represent a proof as a tree:
[diagram: proof tree with premises P1 ... P5 at the leaves and conclusion C at the root; each internal node is an inference of the form P1, P2 / C]

8
Inference rules
The inference rules depend on the theory we are reasoning in.

Resolution rule (Boolean logic):

    p ∨ Γ    ¬p ∨ Δ
    ----------------
         Γ ∨ Δ

Sum rule (linear arithmetic):

    x1 ≤ y1    x2 ≤ y2
    -------------------
    x1 + x2 ≤ y1 + y2

9
Invariants from unwindings
A simple way to generalize from bounded to unbounded proofs:
– Consider just one program execution path, as a straight-line program
– Construct a proof for this straight-line program
– See if this proof contains an inductive invariant proving the property
Example program:

    x = y = 0;
    while (*) { x++; y++; }
    while (x != 0) { x--; y--; }
    assert(y == 0);

invariant: {x == y}
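As a quick concrete check (a sketch added here, not part of the original slides), we can execute the example program for arbitrary resolutions of the nondeterministic first loop and confirm that {x == y} holds at every loop head:

```python
def run_example(n_iters: int) -> None:
    """Execute one path of the example program, checking the
    candidate invariant {x == y} at every loop head."""
    x = y = 0
    for _ in range(n_iters):   # models while(*) x++; y++;
        assert x == y          # invariant holds on loop entry
        x += 1
        y += 1
    while x != 0:              # while(x != 0) x--; y--;
        assert x == y          # invariant holds here too
        x -= 1
        y -= 1
    assert y == 0              # the asserted property

# try several resolutions of the nondeterministic choice
for n in range(10):
    run_example(n)
```

Of course, testing does not prove the invariant inductive; that is what the proof-based methods below provide.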

10
Unwind the loops
The unwound straight-line program:

    x = y = 0; x++; y++; [x!=0]; x--; y--; [x!=0]; x--; y--; [x == 0]; [y != 0]

One proof annotates the path with {True}, {x = 0 ∧ y = 0}, {x = y}, ..., {x = 0 ⇒ y = 0}, {False}: this proof of the inline program contains invariants for both loops. Another proof annotates it with {True}, {y = 0}, {y = 1}, {y = 2}, {y = 1}, {y = 0}, {False}: these assertions may diverge as we unwind. A practical method must somehow prevent this kind of divergence! How can we find relevant proofs of program paths?

11
Interpolation Lemma [Craig, 1957]
Let A and B be first-order formulas, using
– some non-logical symbols (predicates, functions, constants)
– the logical symbols ∧, ∨, ¬, ∃, ∀, (), ...
If A ∧ B = false, there exists an interpolant A' for (A, B) such that:
– A implies A'
– A' ∧ B = false
– A' uses only the common vocabulary of A and B
Example: A = p ∧ q, B = ¬q ∧ r, A' = q.
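For the propositional case, the three interpolant conditions can be checked by brute force over truth tables. The following sketch (my own encoding: formulas as Python predicates over an environment dict) verifies that q is an interpolant for the slide's example:

```python
from itertools import product

def is_interpolant(A, B, Ap, vars_A, vars_B, vars_Ap):
    """Check Craig's three conditions by enumerating truth assignments:
    A implies A', A' & B is unsatisfiable, and A' uses only the
    common vocabulary of A and B."""
    if not set(vars_Ap) <= set(vars_A) & set(vars_B):
        return False                       # vocabulary condition fails
    all_vars = sorted(set(vars_A) | set(vars_B))
    for bits in product([False, True], repeat=len(all_vars)):
        env = dict(zip(all_vars, bits))
        if A(env) and not Ap(env):         # A must imply A'
            return False
        if Ap(env) and B(env):             # A' & B must be unsat
            return False
    return True

# A = p & q,  B = ~q & r,  candidate interpolant A' = q
A  = lambda e: e['p'] and e['q']
B  = lambda e: (not e['q']) and e['r']
Ap = lambda e: e['q']
print(is_interpolant(A, B, Ap, ['p', 'q'], ['q', 'r'], ['q']))  # True
```

Real interpolation procedures extract A' from a refutation proof of A ∧ B rather than by enumeration; this check only validates a candidate.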

12
Interpolants as Floyd-Hoare proofs
Proving in-line programs: program → SSA sequence → prover → proof → interpolation → Hoare proof.

    x = y;      x1 = y0
    y++;        y1 = y0 + 1
    [x == y]    x1 = y1

Interpolant sequence: True ⇒ x1 = y0 ⇒ y1 > x1 ⇒ False
1. Each formula implies the next
2. Each is over the common symbols of prefix and suffix
3. Begins with true, ends with false
Hoare proof: {True} x = y {x = y} y++ {y > x} [x == y] {False}

13
Local proofs and interpolants

    x = y;      x1 = y0
    y++;        y1 = y0 + 1
    [y <= x]    y1 ≤ x1

From x1 = y0 derive x1 ≤ y0 and y0 ≤ x1; from y1 = y0 + 1 derive y0 + 1 ≤ y1 and y1 ≤ y0 + 1; from x1 ≤ y0 and y0 + 1 ≤ y1 derive x1 + 1 ≤ y1; with y1 ≤ x1 this gives 1 ≤ 0, i.e. FALSE.
Interpolant sequence read off at the frame boundaries: TRUE, x1 ≤ y0, x1 + 1 ≤ y1, FALSE.
This is an example of a local proof...

14
Definition of local proof

    x1 = y0  |  y1 = y0 + 1  |  y1 ≤ x1

scope of a variable = range of frames it occurs in (y0 spans the first two frames, y1 the last two, x1 all three)
vocabulary of a frame = set of variables "in scope": {x1, y0}, {x1, y0, y1}, {x1, y1}
Local proof: every deduction is written in the vocabulary of some frame. For example, the deduction of x1 + 1 ≤ y1 from x1 ≤ y0 and y0 + 1 ≤ y1 is "in scope" in the middle frame.

15
Forward local proof
Frames and vocabularies: x1 = y0 {x1, y0} | y1 = y0 + 1 {x1, y0, y1} | y1 ≤ x1 {x1, y1}
Forward local proof: each deduction can be assigned a frame such that all the deduction arrows go forward (here: x1 ≤ y0 in the first frame; y0 + 1 ≤ y1 and x1 + 1 ≤ y1 in the second; 1 ≤ 0, i.e. FALSE, in the third).
For a forward local proof, the conjunction of the assertions crossing each frame boundary is an interpolant: TRUE, x1 ≤ y0, x1 + 1 ≤ y1, FALSE.

16
Proofs and relevance
Premises: x1 = y0 + 1; z1 = x1 + 1; x1 ≤ y0, with frame vocabularies {x1, y0} and {x1, y0, z1}.
A proof that carries z1 along yields the interpolant sequence TRUE, x1 = y0 + 1 ∧ z1 = y0 + 2, FALSE, which mentions the irrelevant variable z1. Dropping the unneeded inferences about z1 yields TRUE, x1 = y0 + 1, FALSE, since x1 = y0 + 1 and x1 ≤ y0 already give 1 ≤ 0.
By dropping unneeded inferences, we weaken the interpolant and eliminate irrelevant predicates. Interpolants are neither weakest pre nor strongest post.

17
Applying Occam's razor
Define a (local) proof system
– Can contain whatever proof rules you want
Define a cost metric for proofs
– For example, the number of distinct predicates after dropping subscripts
Exhaustive search for the lowest-cost proof
– May restrict to forward or reverse proofs
Example rules: from x = e, substitute e for x; derive FALSE from an unsatisfiable set of facts; allow simple arithmetic rewriting.
Simple proofs are more likely to generalize. Even this trivial proof system allows useful flexibility.

18
Loop example
Premises: x0 = 0; y0 = 0; x1 = x0 + 1; y1 = y0 + 1; x2 = x1 + 1; y2 = y1 + 1; ...
One proof derives TRUE, x0 = 0 ∧ y0 = 0, x1 = 1 ∧ y1 = 1, x2 = 2 ∧ y2 = 2, ... using a new predicate at every step (cost: 2N).
Another derives TRUE, x0 = y0, x1 = y1, x2 = y2, ... via the intermediate steps x1 = y0 + 1, x2 = y1 + 1, ..., which all collapse once subscripts are dropped (cost: 2).
The lowest-cost proof is simpler, and avoids divergence.
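To make the cost metric concrete, here is a small sketch (the function name and the encoding of predicates as strings are mine, not the talk's) that counts distinct predicates after dropping SSA subscripts, applied to the two proofs of this loop example with N = 2:

```python
import re

def proof_cost(predicates):
    """Cost of a proof: the number of distinct predicates that remain
    after dropping SSA subscripts (x1, y0, ... become x, y, ...)."""
    drop = lambda p: re.sub(r'([a-zA-Z])\d+', r'\1', p)
    return len({drop(p) for p in predicates})

# Proof that tracks concrete values: a new predicate per step.
divergent = ['x1=1', 'y1=1', 'x2=2', 'y2=2']
# Proof via x = y: all steps collapse to x=y and x=y+1.
inductive = ['x0=y0', 'x1=y0+1', 'x1=y1', 'x2=y1+1', 'x2=y2']

print(proof_cost(divergent))  # 4  (grows as 2N with the unwinding)
print(proof_cost(inductive))  # 2  (independent of N)
```

The lower-cost proof is exactly the one whose predicates, read without subscripts, form the inductive invariant.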

19
Interpolation
Generalize from bounded proofs to unbounded proofs
Weak bias
– Rich proof system (large space of proofs)
– Apply Occam's razor (simple proofs are more likely to generalize)
Occam's razor is applied to
– Avoid combinatorial explosion of deductions (relevance)
– Eventually generalize to inductive proofs (convergence)
Apply theorem-proving technology to search a large space of possible proofs for simple proofs
– DPLL, SMT solvers, etc.

20
Widening operators
A widening operator ∇ over-approximates the join (a ⊑ a ∇ b and b ⊑ a ∇ b), and guarantees that for any sequence a0, a1, a2, ..., the chain w0 = a0, w(i+1) = wi ∇ a(i+1) eventually stabilizes.

21
Upward iteration sequence
The widening properties guarantee
– over-approximation (each widened iterate over-approximates the join)
– stabilization (the sequence is eventually stable!)
Narrowing is similar, but contracting.
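As an illustration (a sketch using the standard interval widening, with my own encoding of intervals as pairs), here is an upward iteration with widening for the loop x = 0; while (*) x++;

```python
import math

def widen(a, b):
    """Standard interval widening: any unstable bound jumps to infinity,
    which is what forces the chain to stabilize."""
    (al, ah), (bl, bh) = a, b
    lo = al if al <= bl else -math.inf
    hi = ah if ah >= bh else math.inf
    return (lo, hi)

def post(iv):
    """Abstract post of the loop body x++ on intervals."""
    lo, hi = iv
    return (lo + 1, hi + 1)

def join(a, b):
    return (min(a[0], b[0]), max(a[1], b[1]))

# Upward iteration for: x = 0; while (*) x++;
w = (0, 0)                            # abstract initial state
for _ in range(10):
    nxt = widen(w, join(w, post(w)))  # widen against the next iterate
    if nxt == w:                      # stabilized
        break
    w = nxt
print(w)  # (0, inf)
```

The first widening step already jumps the upper bound to infinity, so the chain stabilizes after two iterations; a narrowing pass could then try to refine the result.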

22
Widening as local deduction
Since widening loses information, we can think of it as a deduction rule. In fact, we may have several deduction rules at our disposal: abstract post, join, widen.

23
Widening with octagons

    x = y = 0; x++; y++; [x!=0]; x--; y--; [x!=0]; x--; y--; [x == 0]; [y != 0]

The iteration annotates the path with octagon invariants running from {True} down to {False}. Because we proved the property, we have computed an interpolant. But note the irrelevant fact! Our proof rules are too coarse to eliminate this fact.

24
Over-widening (with intervals)

    x = y = 0; x = 1 - x; y++; [x == 2];

The annotations run from {True} to {False}. Note that if we had waited one step to widen, we would have a proof.

25
Safe widening Let us define a safe widening sequence as one that ends in a safe state. Suppose we apply a sequence of rules and fail... We may postpone a widening to achieve a safety proof

26
Incompleteness
Widening/narrowing:
– The proof system is incomplete on purpose: we restrict it (strong bias) to enforce relevance focus and convergence
– These properties are obtained at the risk of over-widening
Interpolation:
– Incompleteness derives only from incompleteness of the underlying logic (for example, in Presburger arithmetic we have completeness)
– Relevance focus and convergence rely on general heuristics: Occam's razor (simple proofs tend to generalize) and theorem-proving techniques
– The choice of logic and axioms also represents a weak bias

27
Consequences of strong bias
Widening requires domain knowledge, which entails a careful choice of the logical language L.
– Octagons: easy
– Unions of octagons: harder
– Presburger arithmetic formulas: ???
This entails incompleteness, as a restricted language implies loss of information. It also means we can tailor the representation for efficiency.
– Octagons: use the half-space representation, not the convex hull of vertices
– Polyhedra: mixed representation

28
Advantages of weak bias
Boolean logic (e.g., hardware verification)
– The language L is Boolean circuits over system state variables
– There is no obvious a priori widening for this language
– Interpolation techniques are the most effective known for this problem: McMillan CAV 2003 (feasible interpolation using SAT solvers), Bradley VMCAI 2011 (interpolation by local proof)
– Note that rapid convergence is very important here
Infinite-state cases requiring disjunctions
– Hard to formulate a widening a priori
– Weak bias can be used to avoid a combinatorial explosion of disjuncts (example: IMPACT)
Scaling to a large number of variables
– Weak bias can allow focus on just the relevant variables
Weak bias can be used in cases where domain knowledge is lacking.

29
Simple example

    for (i = 0; i < N; i++) a[i] = i;
    for (j = 0; j < N; j++) assert(a[j] == j);

invariant: {∀x. 0 ≤ x ∧ x < i ⇒ a[x] = x}
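As a concrete check (a sketch, not from the slides), we can run the two loops and test the quantified invariant at each head of the first loop:

```python
def check_example(N: int) -> None:
    """Run both loops of the simple example, checking the candidate
    invariant {forall x. 0 <= x < i  =>  a[x] = x} at the first
    loop's head."""
    a = [None] * N
    for i in range(N):
        assert all(a[x] == x for x in range(i))   # the quantified invariant
        a[i] = i
    for j in range(N):
        assert a[j] == j                          # the asserted property

# the invariant holds for every bound we try
for N in range(6):
    check_example(N)
```

The interesting point on the following slides is not checking this invariant but synthesizing its universally quantified form automatically.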

30
Partial axiomatization
Axioms of the theory of arrays (with select and update):
∀A, I, V. select(update(A, I, V), I) = V
∀A, I, J, V. I ≠ J → select(update(A, I, V), J) = select(A, J)
Axioms for arithmetic [ integer axiom] etc...
We use a (local) first-order superposition prover to generate interpolants, with a simple metric for proof complexity.
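The two array axioms can be spot-checked against an executable model of select and update (a sketch: functional arrays modeled as Python dicts, with the names select/update matching the slide):

```python
def update(A, I, V):
    """Functional array update: returns a fresh map; A is unchanged."""
    B = dict(A)
    B[I] = V
    return B

def select(A, I):
    """Array read."""
    return A[I]

A = {0: 10, 1: 11}
# Axiom 1: reading back the updated index yields the written value
assert select(update(A, 0, 99), 0) == 99
# Axiom 2: reading a distinct index is unaffected by the update
assert select(update(A, 0, 99), 1) == select(A, 1)
```

In the prover these are universally quantified axioms used by superposition, not executable checks; the model only illustrates their meaning.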

31
Unwinding the simple example
Unwind the loops twice:

    i = 0; [i < N]; a[i] = i; i++; [i < N]; a[i] = i; i++; [i >= N];
    j = 0; [j < N]; j++; [j < N]; a[j] != j;

SSA: i0 = 0; i0 < N; a1 = update(a0, i0, i0); i1 = i0 + 1; i1 < N; a2 = update(a1, i1, i1); i2 = i1 + 1; i2 ≥ N ∧ j0 = 0; j0 < N ∧ j1 = j0 + 1; j1 < N; select(a2, j1) ≠ j1

Interpolants: {i0 = 0}; {0 ≤ U ∧ U < i1 ⇒ select(a1, U) = U}; {0 ≤ U ∧ U < i2 ⇒ select(a2, U) = U}; {j ≤ U ∧ U < N ⇒ select(a2, U) = U}

The weak bias prevents the constants from diverging as 0, succ(0), succ(succ(0)), ...

32
With strong bias
Something like the array segmentation functor of Cousot, Cousot & Logozzo.

    i = 0; [i < N]; a[i] = i; i++; [i < N]; a[i] = i; i++; [i >= N];
    j = 0; [j < N]; j++; [j < N]; a[j] != j;

Note: it so happened here that our first try at a widening was safe, but this may not always be so...

33
Comparison
Interpolation, compared with widening/narrowing:
– Axioms and proof bias are generic; little domain knowledge is represented
– Uses a generic theorem prover to generate local proofs; no domain-specific tuning
– Not as scalable as the strong-bias approach

34
List deletion example
Add a few axioms about reachability. Invariant synthesized with 3 unwindings (after some simplification):

    a = create_list();
    while (a) {
        tmp = a->next;
        free(a);
        a = tmp;
    }

{rea(next, a, nil) ∧ ∀x. (rea(next, a, x) → x = nil ∨ alloc(x))}

There is no need to craft a new specialized domain for linked lists. Weak bias can be used in cases where domain knowledge is lacking.

35
Are interpolants widenings?
A safe widening sequence is an interpolant. An interpolant is not necessarily a widening sequence, however:
– It does not satisfy the expansion property
– It does not satisfy the eventual stability property as we increase the sequence length
A consequence of giving up stabilization is that inductive invariants (post-fixed points) are typically found in the middle of the sequence, not at an eventual stabilization point.
– Early formulas tend to be too strong (influenced by the initial condition)
– Late formulas tend to be too weak (influenced by the final condition)

36
Typical interpolant sequence

    x = y = 0; x++; y++; [x!=0]; x--; y--; [x!=0]; x--; y--; [x == 0]; [y != 0]

The sequence runs from {True} down to {False}: early formulas are too strong, late formulas too weak. The middle formulas are weakened, but not expansive, and the sequence does not stabilize at an invariant. No matter how far we unwind, we may not get stabilization.

37
Conclusion
Widening/narrowing and interpolation are methods of generalizing from bounded to unbounded proofs. Formally, widening/narrowing satisfies stronger conditions:
– widening/narrowing: soundness, expanding/contracting, stabilizing
– interpolation: soundness (stabilization is not obtained when proving properties, however)

38
Conclusion, cont.
Heuristically, the difference is weak vs. strong bias:
– strong bias: restricted proof system, incompleteness, smaller search space, domain knowledge, efficient representations
– weak bias: rich proof system, completeness, large search space, Occam's razor, generic representations
Can we combine strong and weak heuristics?
– Fall back on weak heuristics when strong fails
– Use weak heuristics to handle combinatorial complexity
– Build known widenings into theory solvers in SMT?
