**Slide 1: Compositional Methods and Symbolic Model Checking**

Ken McMillan, Cadence Berkeley Labs

**Slide 2: Compositional methods**

- Reduce large verification problems to small ones by decomposition, abstraction, specialization, etc.
- Based on symbolic model checking.
- Aimed at system-level verification.
- This talk considers the implications of such an approach for symbolic model checking.

**Slide 3: Example: cache coherence**

Distributed cache coherence (Eiriksson 98): protocol hosts, each with an interface (INTF), processor (P), memory (M), and IO blocks, connected to an S/F network. The abstract model is nondeterministic, with atomic actions and a single-address abstraction. Coherence, etc., were verified at this level.

**Slide 4: Refinement to the RTL level**

Refinement relations connect the abstract model (a host, the other hosts, and the S/F network protocol) to the RTL implementation (~30K lines of Verilog, including TAGS, CAM, and TABLES structures).

**Slide 5: Contrast with block-level verification**

The block-verification approach to the capacity problem:

- isolate small blocks
- place ad hoc constraints on their inputs

This is falsification, because the constraints are not verified and block interactions are not exposed to verification. Result: FV does not replace any simulation activity.

**Slide 6: What are the implications for SMC?**

Verification and falsification have different needs:

- A proof is as strong as its weakest link; hence approximation methods are not attractive.
- Predictability and metrics are important: we must have reliable decomposition strategies.
- The choice of linear vs. branching time has implications.

**Slide 7: Predictability**

We require metrics that predict model-checking hardness. The most important is the number of state variables: the probability of successful verification (or falsification) drops off sharply as the number of state bits grows, and each reduction moves the original system back toward the tractable range. A powerful model checker can save reduction steps but is not essential; predictability is more important than capacity.

**Slide 8: Example: simple pipeline**

A pipeline with 32 registers of 32 bits each, bypass logic, and control. Goal: prove equivalence to the unpipelined model (modulo delay).

**Slide 9: Direct approach by model checking**

Compare the pipeline's results against a delayed reference model over the same operation stream. Model checking is completely intractable due to the large number of state variables (> 2048).

**Slide 10: Compositional refinement verification**

Figure: an abstract model is related to the system by translations.

**Slide 11: Localized verification**

Each translation is verified locally: assume some of the translations, prove the others, relative to the abstract model.


**Slide 13: Circular inference rule**

- f1 up to t-1 implies f2 up to t
- f2 up to t-1 implies f1 up to t
- therefore, always f1 and f2

(Related: AL 95, AH 96.)
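As a toy illustration of the rule (my own sketch, not McMillan's SMV formulation, and bounded to a single trace rather than all behaviors), consider a two-signal system in which each signal's correctness at time t follows from the other's correctness up to t-1:

```python
# Hypothetical sketch of circular compositional induction on one bounded
# trace: verify the two circular premises, then conclude "always f1 and f2".
def step(state):
    # Toy transition: each signal copies the other.
    x, y = state
    return (y, x)

def check_circular(init, step, f1, f2, bound=10):
    """Check the base case and both circular premises along a bounded trace:
    (f1 up to t-1) implies f2 at t, and (f2 up to t-1) implies f1 at t."""
    trace = [init]
    for _ in range(bound):
        trace.append(step(trace[-1]))
    if not (f1(init) and f2(init)):          # base case at t = 0
        return False
    for t in range(1, len(trace)):
        if all(f1(s) for s in trace[:t]) and not f2(trace[t]):
            return False
        if all(f2(s) for s in trace[:t]) and not f1(trace[t]):
            return False
    return True                              # conclude: always f1 and f2

ok = check_circular((1, 1), step, lambda s: s[0] == 1, lambda s: s[1] == 1)
print(ok)  # True
```

The apparent circularity is broken by time: each premise only assumes the other property strictly earlier than the point being proved.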

**Slide 14: Decomposition for simple pipeline**

The pipeline (32 registers of 32 bits, plus control) is decomposed into two lemmas: operand correctness (fetched operands equal the correct values from the reference model) and result correctness.

**Slide 15: Lemmas in SMV**

Operand correctness layer L1:

```
layer L1: if (stage2.valid) {
  stage2.opra := stage2.aux.opra;
  stage2.oprb := stage2.aux.oprb;
  stage2.res := stage2.aux.res;
}
```

**Slide 16: Effect of decomposition**

With the correct values from the reference model assumed, only a slice of the design remains to be proved. Bit slicing results from the "cone of influence reduction" (similarly in the reference model).

**Slide 17: Resulting MC performance**

- Operand correctness property: 80 state variables; runtime fits a 3rd-order polynomial.
- Result correctness property: easy, just a comparison of 32-bit adders.

**Slide 18: NOT!**

The previous slide showed a hand-picked variable order. With an ordering based on topological distance, the BDDs actually blow up due to bad variable ordering.

**Slide 19: Problem with topological ordering**

The comparison runs the reference register file and the implementation register file (with its bypass logic) side by side. The variables of the two register files should be interleaved in the BDD order, but this is not evident from the topology.

**Slide 20: Sifting to the rescue (?)**

Note the log scale and the high variance. Lessons (?):

- We cannot expect to solve PSPACE problems reliably.
- We need a strategy to deal with heuristic failure.

**Slide 21: Predictability and metrics**

Reducing the number of state variables: decomposition cuts the design from 2048 state bits to 80, roughly 600 orders of magnitude in state-space size. If heuristics fail, other reductions are available.
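The "~600 orders of magnitude" figure follows directly from the bit counts; a quick sanity check:

```python
# Sanity-checking the slide's arithmetic: dropping from 2048 to 80 state
# bits shrinks the state space by a factor of 2^(2048 - 80), which is
# (2048 - 80) * log10(2) decimal orders of magnitude: about 592, the
# slide's "~600".
from math import log10

orders = (2048 - 80) * log10(2)
print(round(orders))  # 592
```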

**Slide 22: Big structures and path splitting**

Figure: a specification over a big structure P is split into per-index obligations P_i.

**Slide 23: Temporal case splitting**

Record the register index in an auxiliary variable v; then prove separately, for each i, that p holds at all times when v = i.

**Slide 24: Case split for simple pipeline**

Show correctness only for operands fetched from register i:

```
forall (i in REG)
  subcase L1[i] of stage2.opra//L1 for stage2.aux.srca = i;
```

Abstract the remaining registers to "bottom". Result: 23 state bits in the model; checking one case takes about 1 second. What about the 32 cases?
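The idea behind the case split can be sketched in miniature (a hypothetical toy register-file model, not the pipeline itself): the property "reads return the last value written" is split into one sub-property per register index, each of which only constrains the steps where that index is touched.

```python
# Toy temporal case split: prove "always p" by proving, for each index i,
# that p holds at all times when the recorded index equals i.
REGS = range(4)  # 4 registers instead of 32, for illustration

def run(writes):
    """writes: list of (reg, value). Returns the trace of
    (register_file_snapshot, last_write) pairs after each step."""
    rf = {r: 0 for r in REGS}
    trace = []
    for r, v in writes:
        rf[r] = v
        trace.append((dict(rf), (r, v)))
    return trace

def p_case(state, i):
    """Case i of the property: constrains only steps that touch index i."""
    rf, (r, v) = state
    return r != i or rf[i] == v

writes = [(0, 5), (2, 7), (0, 9)]
trace = run(writes)
# Prove each case separately; their conjunction gives "always p".
cases = [all(p_case(s, i) for s in trace) for i in REGS]
print(all(cases))  # True
```

Each case involves only one register's state, which is what lets the per-case model shrink to 23 bits on the real pipeline.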

**Slide 25: Exploiting symmetry**

- Symmetric types: the semantics is invariant under permutations of the type, enforced by type-checking rules.
- Symmetry reduction rule: choose a set of representative cases under symmetry.
- Type REG is symmetric, so one representative case is sufficient (~1 sec). Estimated time savings from the case split: 5 orders of magnitude.

But wait, there's more...
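A toy demonstration of why one representative suffices (my own sketch, not SMV's symmetry rule): if the model treats all register indices alike, renaming the indices by any permutation maps case 0 onto any other case without changing the verdict.

```python
# Symmetry sketch: per-case verdicts of a symmetric model are invariant
# under permutations of the index type, so checking one case decides all.
import itertools

REGS = list(range(4))

def check_case(i, writes):
    """Case i of 'reads of register i see the last value written to i'."""
    rf = {r: None for r in REGS}
    for r, v in writes:
        rf[r] = v
        if r == i and rf[i] != v:   # property checked only where index i appears
            return False
    return True

writes = [(0, 5), (2, 7), (0, 9)]
base = check_case(0, writes)
# Renaming registers by any permutation turns case 0 into case perm[0],
# with the same verdict:
invariant = all(
    check_case(perm[0], [(perm[r], v) for r, v in writes]) == base
    for perm in itertools.permutations(REGS)
)
print(invariant)  # True
```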

**Slide 26: Data type reductions**

Problem: types with large ranges. Solution: reduce a large (or infinite) type T to a small one, such as {i, T\i}, where T\i represents all the values in T except i. This is an abstract interpretation.

**Slide 27: Type reduction for simple pipeline**

Only register i is relevant, so reduce type REG to two values:

```
using REG->{i} prove stage2.opra//L1[i];
```

The number of state bits is now 11, and verification time is independent of the register-file size. Note: arithmetic can also be abstracted out of the verification using uninterpreted functions...
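The reduction can be pictured as an abstraction function on index values (a minimal sketch; the names are illustrative, not SMV's): every index other than the tracked i collapses to a single abstract value.

```python
# Data type reduction sketch: collapse type REG to the two-valued domain
# {i, OTHER}, where OTHER abstracts every index except the tracked i.
OTHER = "other"

def abstract(r, i):
    """Map a concrete register index to the reduced domain {i, OTHER}."""
    return i if r == i else OTHER

# Concrete writes over a large register file...
writes = [(0, 5), (17, 7), (0, 9), (31, 4)]
i = 0
# ...become abstract writes over a 2-valued index type:
abs_writes = [(abstract(r, i), v) for r, v in writes]
print(abs_writes)  # [(0, 5), ('other', 7), (0, 9), ('other', 4)]
```

Because the abstract domain has a fixed size, the verification cost no longer depends on how many registers the concrete design has.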

**Slide 28: Effect of reduction**

The reductions take the original system from 2048 state bits to 84, and then to 11. Manual decomposition produces order-of-magnitude reductions in the number of state bits; the inflection point of the verification-probability curve is crossed very rapidly.

**Slide 29: Desiderata for model checking methods**

- Predictability and metrics are important: base the proof strategy on a reliable metric (the number of state bits).
- Prefer reliable performance in a given range to occasional success on large problems (e.g., stabilize the variable ordering). Methods that diverge unpredictably on small problems (e.g., infinite-state methods, widening) are less useful.
- Moderate performance improvements are not that important: each reduction step gains multiple orders of magnitude.
- Approximations are not appropriate, given PSPACE-completeness.

**Slide 30: Linear vs. branching time**

Model checking works on a fixed model; compositional verification must hold for all models. Verification complexity (in formula size):

|                | CTL    | LTL    |
|----------------|--------|--------|
| model checking | linear | PSPACE |
| compositional  | EXP    | PSPACE |

In practice, with LTL, we can mostly recover linear complexity...

**Slide 31: Avoiding "tableau variables"**

Problem: LTL model checking adds state variables for the temporal operators. Ways to eliminate tableau variables:

- push path quantifiers inward (LTL to CTL*)
- transition formulas (CTL+)
- extract transition and fairness constraints

**Slide 32: Translating LTL to CTL***

Rewrite rules push path quantifiers inward; in addition, if p is boolean, no rule is needed. By adding path quantifiers, we eliminate tableau variables.
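The rules appear as formulas on the slide; standard rewrites of this kind (my reconstruction, sound for arbitrary path formulas φ, ψ and boolean p, and not necessarily McMillan's exact list) look like:

```latex
% Pushing the universal path quantifier A inward (sound rewrites):
A\,p \equiv p                                        % p boolean: no quantifier needed
A\,X\varphi \equiv AX\,A\varphi                      % next distributes
A\,G\varphi \equiv AG\,A\varphi                      % globally distributes
A(\varphi \wedge \psi) \equiv A\varphi \wedge A\psi  % conjunction distributes
% No such distribution rule is sound for F or U in general.
```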

**Slide 33: Rewrites that don't work**

Figure: timing diagrams over p and q giving counterexamples to the unsound rewrites.

**Slide 34: Examples**

- LTL formulas that translate to CTL formulas (note the singly nested fixed point).
- Incomplete rewriting (to CTL*): 3 tableau variables reduced to 1.
- Conjecture: all resulting formulas are forward checkable.

**Slide 35: Transition modalities**

- Transition formulas.
- CTL+ state modalities, where p is a transition formula.
- Example CTL+ formulas (figure).
- CTL+ is still checkable in linear time.

**Slide 36: Constraint extraction**

Extracting path constraints, where p is a transition formula. Using rewriting and the above, with fairness constraints, circular compositional reasoning applies: if G, D, Q, and f are transition formulas, the result is in CTL+, hence complexity is linear. Note: typically G, D, and Q are very large, while f is small.

**Slide 37: Effect of reducing LTL to CTL+**

- In practice, tableau variables are rarely needed, so complexity is exponential only in the number of state variables (an important metric for the proof strategy).
- Doubly nested fixed points are used only where needed, i.e., when fairness constraints apply.
- Both forward and backward traversal are possible. Curious point: backward is commonly faster in refinement verification.

**Slide 38: SMC for compositional verification**

BDDs are great fun, but...

- We cannot expect to solve PSPACE-complete problems reliably; user reductions provide a fallback when heuristics fail.
- Robust metrics are important to the proof strategy.
- Each user reduction gains many orders of magnitude, so modest performance improvements are not very important.
- Exact verification is important.
- We must be able to handle linear time efficiently.
