
1 Benefits of Bounded Model Checking at an Industrial Setting
F. Copty, L. Fix, R. Fraer, E. Giunchiglia*, G. Kamhi, A. Tacchella*, M. Y. Vardi**
Intel Corp., Haifa, Israel; *Università di Genova, Genova, Italy; **Rice University, Houston (TX), USA

2 Technical framework
- Symbolic Model Checking (MC)
  - Over 10 years of successful application in formal verification of hardware and protocols
  - Traditionally based on Reduced Ordered Binary Decision Diagrams (BDDs)
- Symbolic Bounded Model Checking (BMC)
  - Introduced recently, but shown to be extremely effective for falsification (bug hunting)
  - Based on propositional satisfiability (SAT) solvers

3 Open points
- Why is BMC effective?
  - Because the search is bounded, and/or...
  - ...because it uses SAT solvers instead of BDDs?
- What is the impact of BMC on industrial-size verification test cases?
  - Traditional measures: performance and capacity
  - A new perspective: productivity

4 Our contribution
- Apples-to-apples comparison
  - Expert tuning on both the BDD and SAT sides => optimal setting for SAT found by tuning search heuristics
  - BDD-based BMC vs. SAT-based BMC => using SAT (rather than bounding) is a win
- A new perspective on BMC on industrial test cases
  - BMC performance and capacity => SAT capacity reaches far beyond BDDs
  - SAT-based BMC productivity => greater capacity + optimal setting = productivity boost

5 Agenda
- BMC techniques
  - Implementing BDD-based BMC
  - SAT-based BMC: algorithm, solver, and strategies
- Evaluating BMC at an industrial setting
  - BMC tools: Forecast (BDDs) and Thunder (SAT)
  - Measuring performance and capacity
    - In search of an optimal setting for Thunder and Forecast
    - Thunder vs. Forecast
    - Thunder capacity boost
  - Measuring productivity
- Witnessed benefits of BMC

6 BFS traversal
[Figure: breadth-first traversal from the initial states toward the buggy states, yielding a counterexample trace.]

7 From BDD-based MC to BMC
Adapting state-of-the-art BDD techniques to BMC:
- Bounded prioritized traversal
  - When the BDD size reaches a certain threshold...
  - ...split the frontier into balanced partitions, and...
  - ...prioritize the partitions according to some criterion
  - Ensure the bound is not exceeded
- Bounded lazy traversal
  - Works backwards
  - Applies the bounded cone of influence
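The bounded prioritized traversal can be sketched with explicit state sets standing in for BDDs (a toy illustration, not Forecast's actual implementation; the set-size threshold and the "smaller partition first" priority are assumptions made here for concreteness):

```python
from collections import deque

def bounded_prioritized_traversal(init, successors, is_bad, bound,
                                  threshold=4, priority=len):
    """Forward traversal from the initial states, never exceeding `bound`.
    When a frontier grows past `threshold`, split it into two balanced
    partitions and explore them in priority order (here: smaller first)."""
    reached = set(init)
    work = deque([(0, set(init))])           # (depth, frontier) pairs
    while work:
        depth, frontier = work.popleft()
        if any(is_bad(s) for s in frontier):
            return depth                     # counterexample depth found
        if depth == bound:
            continue                         # ensure the bound is not exceeded
        nxt = {t for s in frontier for t in successors(s)} - reached
        reached |= nxt
        if not nxt:
            continue
        if len(nxt) > threshold:             # "BDD too big": split the frontier
            states = sorted(nxt)
            half = len(states) // 2
            parts = sorted([set(states[:half]), set(states[half:])], key=priority)
            work.extend((depth + 1, p) for p in parts)
        else:
            work.append((depth + 1, nxt))
    return None                              # no bad state within the bound

# Toy model (an assumption): states 0..7, each s steps to (s+1)%8 and (s+2)%8.
depth = bounded_prioritized_traversal([0], lambda s: {(s + 1) % 8, (s + 2) % 8},
                                      lambda s: s == 5, bound=10)
print(depth)  # the buggy state (5) is first reached at depth 3
```

With `bound=2` the same query returns `None`: the bug lies deeper than the bound, which is exactly the trade-off bounded traversal makes.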

8 SAT-based BMC
[Figure: the model is unrolled up to the bound (k = 4) and handed to the SAT solver; if the instance is Sat, a counterexample is found; if Unsat, the bound k is increased.]
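The loop on this slide can be sketched as follows (a minimal illustration with a hypothetical 2-bit-counter model; the brute-force path enumeration stands in for a real SAT solver applied to the unrolled formula):

```python
from itertools import product

# Toy model (an assumption): a 2-bit counter (b1, b0), initial state 00,
# incremented mod 4 each step; "buggy" states are those where the counter is 3.
def init(s):   return s == (0, 0)
def bad(s):    return s == (1, 1)
def trans(s, t):
    return t[0] * 2 + t[1] == (s[0] * 2 + s[1] + 1) % 4

def brute_force_sat(length, path_formula):
    """Stand-in for a SAT solver: enumerate all state paths of `length` states."""
    states = list(product((0, 1), repeat=2))
    for path in product(states, repeat=length):
        if path_formula(path):
            return path                     # "Sat": a satisfying path
    return None                             # "Unsat"

def bmc(max_bound):
    """BMC loop: unroll to bound k, ask the solver; if Unsat, increase k."""
    for k in range(max_bound + 1):
        cex = brute_force_sat(k + 1, lambda p: init(p[0]) and bad(p[k])
                              and all(trans(p[i], p[i + 1]) for i in range(k)))
        if cex is not None:
            return k, cex                   # counterexample of length k
    return None, None

print(bmc(10))  # the counter reaches 3 after 3 steps
```

In a real SAT-based BMC tool the unrolled path constraint is encoded as one propositional formula and the solver searches assignments symbolically, rather than enumerating paths.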

9 SAT solvers
- Input: a propositional formula F(x1, ..., xn)
- Output: a valuation v = v1, ..., vn with vi in {0, 1} such that F(v1, ..., vn) = 1
- A program that can answer the question "does there exist v such that F(v) = 1?" is a SAT solver
- Focus on solving SAT
  - By exploring the space of possible assignments
  - Using a sound and complete method
    - Stålmarck's (patented)
    - Davis-Logemann-Loveland (DLL)

10 DLL method

s = {F, v} is an object
next in {SAT, UNSAT, LA, LB, HR} is a variable

DLL-SOLVE(s)
  next <- LA
  repeat
    case next of
      LA: next <- LOOK-AHEAD(s)
      LB: next <- LOOK-BACK(s)
      HR: next <- HEURISTIC(s)
  until next in {SAT, UNSAT}
  return next

[The slide's state diagram labels the outcomes of each step: LOOK-AHEAD yields HR, LB or SAT; LOOK-BACK yields LA or UNSAT; HEURISTIC yields LA or SAT.]
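A runnable rendering of the DLL loop (a sketch, not SIMO itself: look-ahead is unit propagation, look-back is chronological backtracking via recursion, and the heuristic simply branches on the first literal of the first clause):

```python
def simplify(clauses, lit):
    """Make `lit` true: drop satisfied clauses, strip -lit from the rest.
    Returns None on an empty (falsified) clause, i.e. a conflict."""
    out = []
    for c in clauses:
        if lit in c:
            continue
        if -lit in c:
            c = [l for l in c if l != -lit]
            if not c:
                return None
        out.append(c)
    return out

def dll(clauses, assignment=()):
    """Clauses are lists of nonzero ints (DIMACS style: v means v=1, -v means v=0).
    Returns a tuple of satisfying literals, or None if unsatisfiable."""
    units = [c[0] for c in clauses if len(c) == 1]
    while units:                            # Look-Ahead: unit propagation (BCP)
        lit = units[0]
        clauses = simplify(clauses, lit)
        if clauses is None:
            return None                     # conflict: hand control to Look-Back
        assignment += (lit,)
        units = [c[0] for c in clauses if len(c) == 1]
    if not clauses:
        return assignment                   # SAT: every clause satisfied
    lit = clauses[0][0]                     # Heuristic: pick a branching literal
    for choice in (lit, -lit):              # Look-Back: on failure, flip the branch
        sub = simplify(clauses, choice)
        if sub is not None:
            result = dll(sub, assignment + (choice,))
            if result is not None:
                return result
    return None                             # UNSAT below this branch point

print(dll([[1, 2], [-1], [-2, 3]]))  # prints (-1, 2, 3)
```

Real DLL solvers replace the chronological backtracking shown here with non-chronological look-back (backjumping and learning), as the next slide describes.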

11 SIMO: a DLL-based SAT solver
- Boolean Constraint Propagation (BCP) is the only look-ahead strategy
- Non-chronological look-back
  - Backjumping (BJ): escapes trivially unsatisfiable subtrees
  - Learning: dynamically adds constraints to the formula
- Search heuristics
  - Static: branching order is supplied by the user
  - Dynamic
    - Greedy heuristics: simplify as many clauses as possible
    - BCP-based: explore most constrained choices first
  - Independent (relevant) vs. dependent variables
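The BCP look-ahead step can be sketched on its own (clauses encoded DIMACS-style as lists of nonzero ints; this is an illustrative implementation, not SIMO's):

```python
def unit_propagate(clauses, assignment):
    """Boolean Constraint Propagation: repeatedly assign literals forced by
    unit clauses. `assignment` maps variable -> bool; returns the extended
    assignment, or None if some clause is falsified (conflict)."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned, satisfied = [], False
            for lit in clause:
                var, want = abs(lit), lit > 0
                if var in assignment:
                    if assignment[var] == want:
                        satisfied = True
                        break
                else:
                    unassigned.append(lit)
            if satisfied:
                continue
            if not unassigned:
                return None                 # every literal false: conflict
            if len(unassigned) == 1:        # unit clause: its literal is forced
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True
    return assignment

print(unit_propagate([[1], [-1, 2], [-2, 3]], {}))  # {1: True, 2: True, 3: True}
```

Here the unit clause [1] forces x1 = 1, which reduces [-1, 2] to a unit forcing x2 = 1, which in turn forces x3 = 1: a chain of propagations with no search at all.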

12 SIMO's search heuristics
[Table: for each heuristic (Moms, Morel, Unit, Unirel, Unirel2), whether its scoring, selection, and propagation steps consider all variables or only the relevant ones; the table cells are run together in this transcript.]

13 Forecast: BDD-based (B)MC
[Architecture diagram: the property (ForSpec) goes through spec synthesis and the model (HDL) through RTL synthesis; Forecast runs model-checking algorithms on top of an interface to BDD engines (Intel's BDD package, CAL, CUDD), guided by user directives, and returns a proof or a counterexample.]

14 Thunder: SAT-based BMC
[Architecture diagram: the same front end (spec synthesis for the ForSpec property, RTL synthesis for the HDL model) feeds formula generation; Thunder runs on top of an interface to SAT engines (GRASP, SIMO, SATO, Prover), guided by user directives, and returns a proof or a counterexample.]

15 Performance and capacity
- Performance (what resources?)
  - CPU time
  - Memory consumption
- Capacity (what model size?)
  - BDD technology typically tops out at around 400 state variables
  - SAT technology has subtler limitations, depending on:
    - The kind of property being checked
    - The length of the counterexample

16 Measuring performance
- The benchmarks used to measure performance:
  - Focus on safety properties
  - Are challenging for BDD-based model checking
  - Are within the capacity range of BDD-based model checking
- In more detail
  - A total of 17 circuits from Intel's internal selection, each with a known minimal counterexample length k
  - Two formulas per circuit with the Thunder/SIMO flow:
    - A satisfiable instance (falsification) at bound k, and
    - An unsatisfiable instance (verification) at bound k-1

17 An optimal setting for Thunder
- With BJ + learning enabled...
  - ...we tried different heuristics
    - Moms (M) and Morel (MR)
    - Unit (U), Unirel (UR) and Unirel2 (UR2)
- SIMO admits a single optimal setting (UR2)
  - Faster on the instances solved by all the heuristics (16)
  - Solves all instances in less than 20 minutes of CPU time
- Unirel2 is the default setting of the Thunder/SIMO flow

18 Bounded traversal in Forecast
- With an automatically derived initial order
  - Bounded lazy (ABL)
  - Bounded prioritized (ABP)
  - Unbounded prioritized (AUP)
  => bounding does not yield consistent improvements!
- With a semi-automatically derived initial order
  - Bounded settings (SBL, SBP)
  - Unbounded prioritized (SUP)
  => bounding does not yield consistent improvements!

19 An optimal setting for Forecast?
- Default setting is AUP
  - Best approximates the notion of a default setting in Thunder
  - AUP is the best among the automatic (A) settings
- Tuned setting (ST)
  - Semi-automatic initial order
  - Specific combinations of:
    - Unbounded traversal
    - Prioritized traversal
    - Lazy strategy
    - Partitioning the transition relation
- No single optimal tuned setting for Forecast

20 Thunder vs. Forecast
- Forecast default (AUP) is worse than Thunder (UR2)
- Forecast tuned (ST) compares well with Thunder (UR2)
- Forecast ST time does not include:
  - Getting pruning directives
  - Finding a good initial order
  - Getting the best setting

21 Measuring capacity
- The capacity benchmark is derived from the performance benchmark by
  - Removing the pruning directives supplied by experienced users
  - Enlarging the model size beyond the scope of BDD-based MC
- The unpruned models for this analysis...
  - ...have thousands of sequential elements (up to 10k)
  - ...are beyond Forecast's capacity

22 Thunder capacity boost
[Table: for each circuit (Circuits 1-8, some run at more than one bound, shown in parentheses), the number of latches + inputs after pruning, the number of variables in the SAT formula, and Thunder's CPU time; the numeric columns are run together in this transcript and cannot be reliably separated. Thunder solved every instance except Circuit 5, which timed out.]

23 Measuring productivity
- Productivity decreases with user intervention
  - Need to reduce the model size
  - Need to find a good order on state variables
  - Need to find a good tool setting
- No user intervention => no productivity penalty
- Using the Thunder/SIMO BMC flow:
  - Dynamic search heuristic: no need for an initial order
  - Single optimal setting: Unirel2 (with BJ and learning)
  - Extended capacity: no manual pruning
- Comparison with the Forecast BMC flow indicates that SAT (rather than bounding) is the key to better productivity

24 Witnessed benefits of BMC
- A single optimal setting found for Thunder using SIMO: Unirel2 with backjumping and learning
- SAT (rather than bounding) turns out to be the key benefit when using BMC technology
- A complete evaluation
  - Performance of tuned BDDs parallels SAT
  - Impressive capacity of SAT vs. BDDs
  - SAT wins from the productivity standpoint

25 Useful links
- The version of the paper with the correct numbers in the capacity benchmarks:
  - www.cs.rice.edu/~vardi
  - www.cs.rice.edu/~tac
- More information about SIMO:
  - www.cs.rice.edu/CS/Verification
  - www.mrg.dist.unige.it/star

