What Can the SAT Experience Teach Us About Abstraction? Ken McMillan Cadence Berkeley Labs.

1 What Can the SAT Experience Teach Us About Abstraction? Ken McMillan Cadence Berkeley Labs

2 Abstraction and SAT
Abstraction is key to applying formal verification to real systems
–Has allowed verification of simple properties of large systems: in hardware, > 20K registers; in software, > 100K lines of code
–Extract just enough information from the system to prove the property
Exactly how to do this was far from clear at first
Enabler: advances in Boolean SAT solvers
–Exploit the solver's ability to focus the proof on relevant facts

3 Counterexample-guided localization (Kurshan)
Loop: choose an initial abstraction; model check the abstraction; if true, done; if a counterexample (Cex) is found, try to concretize it; if it concretizes, report the Cex; if not, refine the abstraction. SAT is applied in the concretization and refinement steps.

4 Outline
Careful look at modern SAT solvers
–How do they work?
–What general lessons about abstraction can we learn from the experience?
Survey of current abstraction techniques
–How various methods do or do not embody lessons from SAT
A modest proposal
–An attempt to apply the lessons of SAT to software verification

5 SAT solvers
DP: variable elimination
DLL: backtrack search
DPLL (SATO, GRASP, CHAFF, etc.): combines search and deduction

6 DP: the Eager Approach
Eliminate one variable at a time by exhaustive resolution, e.g. resolving away a, then b, then c, then d:
a∨b, a∨c, ¬a∨d, ¬a∨e, ¬b∨¬c, ¬d
→ b∨d, b∨e, c∨d, c∨e
→ ¬c∨d, ¬c∨e
→ d, d∨e, e
→ False
Drawback: the eager approach yields many irrelevant deductions
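The elimination step on the slide above can be sketched in a few lines. This is a minimal, runnable illustration, not a full DP solver: clauses are frozensets of signed integers (a hypothetical encoding chosen here, with -v meaning "not v"), and one call resolves every positive occurrence of a variable against every negative one.

```python
# Minimal sketch of one Davis-Putnam elimination step: resolve each
# clause containing v against each clause containing not-v, drop
# tautologies, and discard all clauses that mention v.
def eliminate(clauses, v):
    pos = [c for c in clauses if v in c]
    neg = [c for c in clauses if -v in c]
    rest = [c for c in clauses if v not in c and -v not in c]
    resolvents = set()
    for p in pos:
        for n in neg:
            r = (p - {v}) | (n - {-v})
            if not any(-lit in r for lit in r):   # skip tautologies
                resolvents.add(frozenset(r))
    return set(rest) | resolvents

# a=1, b=2, c=3, d=4: eliminate a from {a∨b, a∨c, ¬a∨d}
cnf = {frozenset({1, 2}), frozenset({1, 3}), frozenset({-1, 4})}
out = eliminate(cnf, 1)
```

Even on this tiny input the quadratic blow-up of resolvents is visible, which is the slide's point about eagerness.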

7 DPLL: mixed approach
Combines model search and deduction
Solvers characterized by
–Exhaustive BCP
–Conflict-driven learning (resolution)
–Deduction-based decision heuristics

8 DPLL approach
Clauses: (¬a ∨ b) ∧ (¬b ∨ c ∨ d) ∧ (¬b ∨ ¬d)
Decisions ¬c and a; BCP then implies b and d, giving a conflict
Resolving (¬b ∨ c ∨ d) with (¬b ∨ ¬d) yields the learned clause (¬b ∨ c)
BCP guides clause learning by resolution
Learning generalizes failures
Learning guides decisions (VSIDS)
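The BCP step on this slide can be sketched directly. The encoding below (clauses as sets of signed integers, a partial assignment as a dict) is invented for illustration; it replays the slide's example, where the decisions ¬c and a propagate to b and d and then hit a conflict.

```python
# Minimal sketch of Boolean constraint propagation (unit propagation):
# under a partial assignment, repeatedly assign the last unassigned
# literal of any clause whose other literals are all false.
def bcp(clauses, assignment):
    asg = dict(assignment)
    def value(lit):
        v = asg.get(abs(lit))
        return None if v is None else (v if lit > 0 else not v)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(value(l) is True for l in clause):
                continue                     # clause already satisfied
            unassigned = [l for l in clause if value(l) is None]
            if not unassigned:
                return "conflict", asg       # all literals false
            if len(unassigned) == 1:
                lit = unassigned[0]
                asg[abs(lit)] = lit > 0      # unit implication
                changed = True
    return "ok", asg

# The slide's example: (¬a∨b) ∧ (¬b∨c∨d) ∧ (¬b∨¬d), decisions ¬c, a
clauses = [{-1, 2}, {-2, 3, 4}, {-2, -4}]    # a=1, b=2, c=3, d=4
status, asg = bcp(clauses, {3: False, 1: True})
```

A real solver would now resolve the conflicting clause against the reasons for b and d, arriving at the learned clause (¬b ∨ c), exactly as the slide shows.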

9 Two kinds of deduction
(diagram: case splits feed a loop between generalization, which is case-based and general, and propagation, which is lightweight and exhaustive; generalization is guided by the search)
Closing this loop focuses the solver on relevant deductions
–Effectiveness of SAT solvers in guiding abstraction
–Very good performance
What general lessons can we learn from this architecture?

10 Lesson #1: Be Lazy
DP approach
–Eliminate variables by exhaustive resolution
–Extremely eager: deduces all facts about remaining variables
–Essentially quantifier elimination: explodes
DPLL approach
–Lazy: only resolves clauses when model search fails
–Resolution is used as a form of failure generalization: learns general facts from model search failure
Implications:
1. Make expensive deductions only when their relevance can be justified.
2. Don't do quantifier elimination.

11 Lesson #2: Be Eager
In a DPLL solver, we always close deduction under unit resolution (BCP) before making a decision.
–Guides decision making (model search)
–Guides resolution steps in failure generalization
–BCP is updated after decision making and clause learning
Implications:
1. Be eager with inexpensive deduction.
2. Deduce all the cheap facts before trying any expensive ones.
3. Let the cheap deduction guide the expensive deduction.

12 Lesson #3: Learn from the Past
Facts useful in one particular case are likely to be useful in other cases.
This principle is embodied in
–Clause learning
–Deduction-based decision heuristics (e.g., VSIDS)
Implication: deduce facts that have been useful in the past.

13 Current abstraction methods
Focus on software model checking
Do these methods embody the SAT lessons?

14 Static Analysis
Compute the least fixed point of an abstract transformer
–This is the strongest inductive invariant the analysis can provide
Inexpensive analyses: value-set analysis, affine equalities, etc.
These analyses lose information at a merge: joining x=y with x=z yields ⊤ (no fact about x survives)
Scorecard: be eager with inexpensive deductions ✓; be lazy with expensive deductions ✗; learn from the past N/A
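The merge loss on this slide is easy to see concretely. Below is a toy value-set analysis (variable names and value ranges invented for illustration): abstract states map each variable to its set of possible values, and the join is a pointwise union, which cannot remember that x was correlated with y on one branch and with z on the other.

```python
# Toy value-set join: pointwise union of per-variable value sets.
# Relational facts like "x = y or x = z" are not expressible, so
# the merge admits combinations no concrete execution produces.
def join(env1, env2):
    return {v: env1[v] | env2[v] for v in env1}

then_branch = {'x': {1, 2}, 'y': {1, 2}, 'z': {3, 4}}   # after x = y
else_branch = {'x': {3, 4}, 'y': {1, 2}, 'z': {3, 4}}   # after x = z
merged = join(then_branch, else_branch)
```

After the join, the abstract state admits, e.g., x=2 with y=1 and z=3, a combination satisfying neither x=y nor x=z: exactly the information lost at the merge point.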

15 Predicate abstraction
Abstract transformer: strongest Boolean postcondition over given predicates
Advantage: does not lose information at a merge; the join is disjunction, e.g. x=y ∨ x=z
Disadvantages:
–Abstract post is very expensive!
–Computes information about predicates with no relevance justification
Scorecard: be eager with inexpensive deductions ✗; be lazy with expensive deductions ✗; learn from the past N/A
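To make "strongest Boolean postcondition" concrete, here is a brute-force sketch over a deliberately tiny integer domain (the predicates, transition, and domain are invented for illustration; real tools use a theorem prover, not enumeration, but the exponential enumeration over predicate valuations is precisely why the operator is expensive).

```python
# Brute-force strongest Boolean postcondition: enumerate concrete
# states allowed by the abstract precondition, apply the transition,
# and collect the resulting predicate valuations.
from itertools import product

preds = [lambda s: s['x'] == 0, lambda s: s['y'] == 0]

def alpha(s):
    return tuple(p(s) for p in preds)      # abstract a concrete state

def post(pre_valuations, transition, domain=range(-1, 2)):
    out = set()
    for x, y in product(domain, repeat=2):
        s = {'x': x, 'y': y}
        if alpha(s) in pre_valuations:
            out.add(alpha(transition(s)))
    return out

# Transition x = y from the unconstrained precondition
all_vals = set(product([False, True], repeat=2))
result = post(all_vals, lambda s: {'x': s['y'], 'y': s['y']})
```

Note the result keeps the correlation between the predicates: after x = y, either both x=0 and y=0 hold or neither does, which is the precision a Boolean program (next slides) gives up.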

16 PA with CEGAR loop
Loop: choose an initial abstraction T#; model check T#; if true, done; if a Cex is found, ask whether it extends from T# to T; if yes, report the Cex; if no, choose predicates to refute it and add them to T#
Choosing predicates to refute cex's
–Generalizes failures
–Some relevance justification
Still performs expensive deduction without justification
–strongest Boolean postcondition
Fails to learn from the past
–Starts fresh each iteration
–Forgets expensive deductions
Scorecard: be eager with inexpensive deductions ✗; be lazy with expensive deductions ✗+; learn from the past ✗

17 Boolean Programs
Abstract transformer
–Weaker than predicate abstraction
–Evaluates predicates independently, losing correlations: from precondition {T}, after x=y, predicate abstraction captures the correlation between x=0 and y=0, while a Boolean program infers nothing ({T})
Advantages
–Computes less expensive information eagerly
Disadvantages
–Still computes expensive information without justification
–Still uses the CEGAR loop
Scorecard: be eager with inexpensive deductions ✗; be lazy with expensive deductions ✗++; learn from the past ✗

18 Lazy Predicate Abstraction
Unwind the program CFG into a tree
–Refine paths as needed to refute errors (e.g., an error path through x=y; y=0)
Add predicates along the path to allow refutation of the error
Refinement is local to an error path
Search continues after refinement
–Do not start fresh: no big CEGAR loop
Previously useful predicates applied to new vertices

19 Lazy Predicate Abstraction (cont.)
Scorecard: be eager with inexpensive deductions ✗; be lazy with expensive deductions −; learn from the past ✓

20 SAT-based BMC
Pipeline: program → loop unwinding → convert to bit level → SAT
Inherits all the properties of SAT
Deduction limited to propositional logic
–Cannot directly infer facts like x ≤ y
–Inexpensive deduction limited to BCP
Scorecard: be eager with inexpensive deductions −−; be lazy with expensive deductions ✓; learn from the past ✓

21 SAT-based with Static Analysis
Pipeline: program → loop unwinding → convert to bit level → SAT, with static analysis informing the translation (diagram: a decision refines a value-set fact about x=y and x=z down to x=z)
Allows a richer class of inexpensive deductions
Inexpensive deductions are not updated after decisions and clause learning
–Coupling could be tighter
–Perhaps using lazy decision procedures?
Scorecard: be eager with inexpensive deductions −; be lazy with expensive deductions ✓; learn from the past ✓

22 Lazy abstraction and interpolants
A way to apply the lessons of SAT to lazy abstraction
Keep the advantages of lazy abstraction...
–Local refinement (be lazy)
–No "big loop" as in CEGAR (learn from the past)
...while avoiding the disadvantages of predicate abstraction...
–no eager image computation
...and propagating inexpensive deductions eagerly
–as in static analysis

23 Interpolation Lemma (Craig, 1957)
Notation: L(Σ) is the set of first-order formulas over the symbols of Σ
If A ∧ B = false, there exists an interpolant A' for (A, B) such that:
–A ⇒ A'
–A' ∧ B = false
–A' ∈ L(A) ∩ L(B)
Example: A = p ∧ q, B = ¬q ∧ r, A' = q
Interpolants from proofs
–In certain quantifier-free theories, we can obtain an interpolant for a pair A, B from a refutation in linear time
–In particular, we can have linear arithmetic, uninterpreted functions, and restricted use of arrays
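The three conditions of the lemma can be checked mechanically in the propositional case. Below is a brute-force checker (a toy for illustration only; the representation of a formula as a predicate paired with its variable set is an assumption of this sketch, and it says nothing about how interpolants are extracted from proofs).

```python
# Brute-force check of the interpolation conditions:
# A implies A', A' and B jointly unsatisfiable, and A' written only
# over the shared vocabulary. Formulas are (predicate, variable-set)
# pairs, where the predicate takes a dict of Boolean values.
from itertools import product

def assignments(vs):
    vs = sorted(vs)
    for bits in product([False, True], repeat=len(vs)):
        yield dict(zip(vs, bits))

def is_interpolant(A, B, I):
    fa, va = A; fb, vb = B; fi, vi = I
    if not vi <= (va & vb):                  # vocabulary condition
        return False
    for m in assignments(va | vb):
        if fa(m) and not fi(m):              # A implies A' fails
            return False
        if fi(m) and fb(m):                  # A' and B satisfiable
            return False
    return True

# The slide's example: A = p ∧ q, B = ¬q ∧ r, A' = q
A = (lambda m: m['p'] and m['q'], {'p', 'q'})
B = (lambda m: not m['q'] and m['r'], {'q', 'r'})
I = (lambda m: m['q'], {'q'})
ok = is_interpolant(A, B, I)
```

The candidate p, by contrast, fails the vocabulary condition because p does not occur in B.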

24 Interpolants for sequences
Let A1...An be a sequence of formulas
A sequence A'0...A'n is an interpolant for A1...An when
–A'0 = True
–A'(i-1) ∧ Ai ⇒ A'i, for i = 1..n
–A'n = False
–and finally, A'i ∈ L(A1...Ai) ∩ L(A(i+1)...An)
In other words, the interpolant is a structured refutation of A1...An
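The sequence conditions can likewise be verified by brute force in a propositional toy. The example formulas here (A1 = p, A2 = p ⇒ q, A3 = ¬q, with interpolants True, p, q, False) are invented for illustration, and the shared-vocabulary condition is omitted for brevity.

```python
# Check the sequence-interpolant conditions: A'0 = True,
# A'(i-1) ∧ Ai implies A'i for each i, and A'n = False,
# by enumerating all Boolean assignments.
from itertools import product

def holds(formula, vs):
    # True iff formula is valid over all assignments to vs
    return all(formula(dict(zip(vs, bits)))
               for bits in product([False, True], repeat=len(vs)))

def is_seq_interpolant(As, Is, vs):
    if not holds(Is[0], vs):                 # A'0 must be True
        return False
    if holds(Is[-1], vs):                    # A'n must be False
        return False
    for i in range(len(As)):                 # A'(i-1) ∧ Ai ⇒ A'i
        step = lambda m, i=i: (not (Is[i](m) and As[i](m))) or Is[i + 1](m)
        if not holds(step, vs):
            return False
    return True

As = [lambda m: m['p'],
      lambda m: (not m['p']) or m['q'],
      lambda m: not m['q']]
Is = [lambda m: True, lambda m: m['p'], lambda m: m['q'], lambda m: False]
ok = is_seq_interpolant(As, Is, ['p', 'q'])
```

Reading the chain True, p, q, False left to right is exactly the "structured refutation" of the slide: each A'i summarizes what the prefix A1..Ai guarantees.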

25 Interpolants as Floyd-Hoare proofs
Path: x=y; y++; [x=y]
SSA sequence: x1=y0; y1=y0+1; x1=y1
Interpolants: True ⇒ x1=y0 ⇒ y1>x1 ⇒ False
1. Each formula implies the next
2. Each is over common symbols of prefix and suffix
3. Begins with true, ends with false
Path refinement procedure: path → SSA sequence → prover (proof) → interpolation (structured proof) → path refinement

26 Lazy abstraction: an example
Program fragment:
do {
  lock();
  old = new;
  if (*) { unlock(); new++; }
} while (new != old);
Control-flow graph with edges labeled: L=0; L=1; old=new; [L!=0]; L=0; new++; [new==old]; [new!=old]

27 Unwinding the CFG
(diagram: begin unwinding at vertex 0 with label T; edge L=0 leads to vertex 1, edge [L!=0] to error vertex 2)
Label the error state with False, by refining the labels on the path (vertex 1 gets label L=0)

28 Unwinding the CFG (cont.)
(diagram: expansion continues with vertices 3..6 along edges L=1; old=new / L=0; new++ / [new!=old] / [L!=0], with the new error vertex again refined to False using L=0)
Covering: state 5 is subsumed by state 1.

29 Unwinding the CFG (cont.)
(diagram: further expansion through [new==old] and another [L!=0] error vertex, refined to False using old=new)
Another cover. Unwinding is now complete.

30 Covering step
If φ(x) ⇒ φ(y)...
–add covering arc x ▷ y
–remove all z ▷ w for w in the subtree of x (a vertex with a covered ancestor can no longer serve as a cover)
Example: a vertex labeled x=y may be covered by one labeled x ≤ y, since x=y implies x ≤ y
We restrict covers to be descending in a suitable total order on vertices. This prevents covering from diverging.
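A toy version of the covering step, under two simplifying assumptions stated up front: labels are explicit finite state sets, so the implication φ(x) ⇒ φ(y) becomes a subset test, and the tree is built by hand (the Vertex class and example labels are invented for illustration).

```python
# Toy covering step on an unwinding with explicit state-set labels.
# Adding the arc x ▷ y also deletes covering arcs whose target lies
# in x's subtree, since those vertices now have a covered ancestor
# and may no longer serve as covers.
class Vertex:
    def __init__(self, label, parent=None):
        self.label = set(label)          # φ(v) as an explicit state set
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

def subtree(v):
    yield v
    for c in v.children:
        yield from subtree(c)

def cover(covering, x, y):
    if x.label <= y.label:               # φ(x) ⇒ φ(y) as subset test
        covering.add((x, y))
        doomed = set(subtree(x))
        for arc in [a for a in covering if a[1] in doomed]:
            covering.discard(arc)        # cover targets must be uncovered
        return True
    return False

r = Vertex({0, 1, 2})
a = Vertex({0}, parent=r)
b = Vertex({0, 1}, parent=r)
covering = set()
cover(covering, a, b)                    # a covered by b
cover(covering, b, r)                    # b covered by r; (a, b) removed
```

The second call illustrates the deletion rule: once b is covered, the earlier arc into b is discarded, preserving the well-labeledness condition that cover targets are uncovered.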

31 Refinement step
Label an error vertex False by refining the path to that vertex with an interpolant for that path.
(diagram: a path through x=0; [x=y]; y++; [y=0] is refined with interpolants; the removed cover is marked ✗)
By refining with interpolants, we avoid predicate image computation.
Refinement may remove covers: strengthening φ(y) can invalidate an arc x ▷ y.

32 Forced cover
Try to refine a sub-path to force a cover
–show that the path from the nearest common ancestor of x, y proves φ(x) at y
(diagram: refining the sub-path from the common ancestor strengthens y's label so the cover applies)
Forced covers allow us to efficiently handle nested control structure, analogous to non-chronological backtracking

33 Incremental static analysis
Update the static analysis of the unwinding incrementally
–Static analysis can prevent many interpolant-based refinements
–Interpolant-based refinements can refine the static analysis
(diagram: a value-set fact y=2 from the analysis discharges one refinement; elsewhere, refining a path through [x=z]/[x≠z] yields the interpolant [y=1 ∧ x≠z], refining the value set y ∈ {1,2})

34 Two kinds of deduction (revisited)
Case splits = path splits
Propagation (lightweight, exhaustive) = static analysis
Generalization (case-based, general, guided by search) = interpolation

35 Applying the lessons from SAT
Be lazy with expensive deductions
–All path refinements justified
–No eager predicate image computation
Be eager with inexpensive deductions
–Static analysis updated after all changes
–Refinement and static analysis interact
Learn from the past
–Refinements incremental: no "big CEGAR loop"
–Re-use of historically useful facts by forced covering

36 Windows driver benchmarks
Windows device driver benchmarks from the BLAST benchmark suite
Compare
–BLAST, a lazy predicate-abstraction-based model checker
–IMPACT, using lazy interpolation-based abstraction
Almost all BLAST time is spent in the predicate image operation.

name      source LOC  flat LOC  BLAST (s)  IMPACT (s)  BLAST/IMPACT
kbfiltr   12K         2.3K      11.9       0.35        34
diskperf  14K         3.9K      117        2.37        49
cdaudio   44K         6.3K      202        1.51        134
floppy    18K         8.7K      164        4.09        41
parclass  138K        8.8K      463        3.84        121
parport   61K         13K       324        6.47        50

37 Performance, Impact vs. Blast
(scatter plot: Blast runtime (s) vs. Impact runtime (s))

38 Effect of static analysis
(scatter plot: Impact runtime without static analysis (s) vs. Impact runtime (s))

39 Comparing Blast and Impact
Similarities
–Both based on lazy abstraction
–Both use the same interpolating prover
Differences
–Interpolants instead of predicate abstraction: avoids eager deduction
–Re-use of facts by "forced covering": analogous to non-chronological backtracking
–Static analysis: interacts with interpolants
Fundamentally similar model checkers, but as in SAT, implementation details can make an orders-of-magnitude difference.

40 Conclusions
SAT solvers provide a paradigm for efficient abstraction
–Interaction of case splitting, propagation and generalization
Current abstraction methods do not fully apply this paradigm
–Mostly indirectly, through use of SAT
–(The same can be said of SMT solvers)
To apply the SAT lessons in a given application, look for
–Useful case splits (e.g., program paths)
–Inexpensive propagation (e.g., value-set analysis)
–Guided generalization (e.g., by interpolation)
Also, avoid over-eager deduction and the "big abstraction loop"
The result can be an order-of-magnitude improvement, as in modern SAT solvers.

41 Conclusions (cont.)
Caveats
–Comparing different implementations is dangerous
–More and better software model checking benchmarks are needed
Tentative conclusions
–For control-dominated codes, predicate abstraction is too "eager": better to be more lazy about expensive deductions
–Propagating inexpensive deductions can produce substantial speedup, roughly one order of magnitude for the Windows examples
–Perhaps by applying the lessons of SAT, we can obtain the same kind of rapid performance improvements obtained in that area
Note: 2-3 orders of magnitude speedup in lazy model checking in 6 months!

42 Future work
Procedure summaries
–Many similar subgraphs in the unwinding due to procedure expansions
–Cannot currently handle recursion
–Can we use interpolants to compute approximate procedure summaries?
Quantified interpolants
–Can be used to generate program invariants with quantifiers
–Works for simple examples, but we need to prevent the number of quantifiers from increasing without bound
Richer theories
–In this work, all program variables are modeled by integers
–Need an interpolating prover for bit-vector theory
Concurrency...

43 Unwinding the CFG
An unwinding is a tree with an embedding in the CFG
(diagram: unwinding vertices 0-4, along edges L=0; L=1; old=new; [L!=0]; L=0; new++, mapped into the CFG by a vertex map Mv and an edge map Me)

44 Expansion
Every non-leaf vertex of the unwinding must be fully expanded: if a vertex is not a leaf and its CFG image has an outgoing edge, then the unwinding has a corresponding edge
...but we allow unexpanded leaves (i.e., we are building a finite prefix of the infinite unwinding)

45 Labeled unwinding
A labeled unwinding is equipped with...
–a labeling function ψ : V → L(S)
–a covering relation ▷ ⊆ V × V
(diagram: two nodes are covered, i.e., have an ancestor at the tail of a covering arc)

46 Well-labeled unwinding
An unwinding is well-labeled when...
–ψ(root) = True
–every edge is a valid Hoare triple
–if x ▷ y then y is not covered

47 Safe and complete
An unwinding is
–safe if every error vertex is labeled False
–complete if every nonterminal leaf is covered
Theorem: a CFG with a safe, complete unwinding is safe.

48 Unwinding steps
Three basic operations:
–Expand a nonterminal leaf
–Cover: add a covering arc
–Refine: strengthen labels along a path so the error vertex is labeled False

49 Overall algorithm
1. Do as much covering as possible
2. If a leaf can't be covered, try forced covering
3. If the leaf still can't be covered, expand it
4. Label all error states False by refining with an interpolant
5. Continue until the unwinding is safe and complete
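The loop above can be sketched end to end on a toy model. Two loud caveats: this sketch works over an explicit finite state space, and it labels children with exact post-images instead of interpolants, so "refinement" is built into expansion. It is a stand-in that shows the expand/cover/terminate structure, not McMillan's actual procedure; the CFG encoding and example program are invented for illustration.

```python
# Toy unwinding loop: vertices are (location, state-set) pairs;
# a vertex is covered if an earlier vertex at the same location
# subsumes its state set; error vertices must have the empty set
# (i.e., label False) for the unwinding to be declared safe.
from collections import deque

def unwind(edges, entry, error_locs, init_states):
    # edges: dict loc -> list of (succ_loc, post), where post(state)
    # returns the set of successor states for one concrete state.
    vertices = [(entry, frozenset(init_states))]
    work = deque([0])
    while work:
        v = work.popleft()
        loc, states = vertices[v]
        if loc in error_locs:
            if states:                    # error vertex not labeled False
                return "unsafe"
            continue
        # Cover: subsumed by an earlier vertex at the same location?
        if any(i < v and l == loc and states <= s
               for i, (l, s) in enumerate(vertices)):
            continue
        # Expand: one child per CFG edge, labeled with its post-image
        for succ, post in edges.get(loc, []):
            image = frozenset(t for s in states for t in post(s))
            vertices.append((succ, image))
            work.append(len(vertices) - 1)
    return "safe"

# x := 0 at l0; at l1, increment while x < 2; error guard [x == 5]
edges_safe = {
    'l0': [('l1', lambda s: {0})],
    'l1': [('l1', lambda s: {s + 1} if s < 2 else set()),
           ('err', lambda s: {s} if s == 5 else set())],
}
safe = unwind(edges_safe, 'l0', {'err'}, range(6))

# Same program, but the error guard [x == 2] is reachable
edges_bad = {
    'l0': [('l1', lambda s: {0})],
    'l1': [('l1', lambda s: {s + 1} if s < 2 else set()),
           ('err', lambda s: {s} if s == 2 else set())],
}
bad = unwind(edges_bad, 'l0', {'err'}, range(6))
```

The subset-based cover check is what terminates the loop here, playing the role that covering arcs (and forced covers) play in the real algorithm.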

