GLA: Gate-Level Abstraction Revisited


1 GLA: Gate-Level Abstraction Revisited
Alan Mishchenko, Niklas Een, Robert Brayton (Department of EECS, UC Berkeley); Jason Baumgartner, Hari Mony, Pradeep Nalla (IBM Systems and Technology Group)

2 Overview
Motivation
Our contributions
Experimental results
Conclusion

3 Formal Verification (FV)
FV attempts to automatically prove properties of hardware designs.
Safety model checking tries to prove that some undesirable behavior never happens.
For example, mutual exclusion: two agents never simultaneously access a shared resource.

4 Deriving an Instance of the Verification Problem
Given are a hardware design and a property to be checked.
Problem formulation: a property monitor is attached to the design, and the two are composed into a sequential miter with a single property output.
The miter consists of combinational logic gates and flip-flops with an initial state.
(Figure: hardware design plus property monitor forming a sequential miter with one property output.)
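To make the miter idea concrete, here is a toy sketch in Python. The arbiter, its signal names, and the monitor are all illustrative inventions, not from the paper; the point is only that the design and a property monitor are composed so that a single property output is 1 exactly when the property (here, mutual exclusion) is violated.

```python
# Toy sequential miter: a tiny two-client arbiter plus a property
# monitor for mutual exclusion. All names are illustrative.

def design_step(turn, req_a, req_b):
    """One clock cycle of a toy arbiter: on conflict, the side whose
    turn it is wins, and the turn flips for fairness."""
    grant_a = req_a and (not req_b or turn == 0)
    grant_b = req_b and (not req_a or turn == 1)
    next_turn = 1 - turn if (req_a and req_b) else turn
    return next_turn, grant_a, grant_b

def property_output(grant_a, grant_b):
    """The miter's single output: 1 iff the property is violated."""
    return int(grant_a and grant_b)

# Simulate the miter for a few cycles from the initial state.
turn = 0
for (ra, rb) in [(1, 0), (1, 1), (0, 1), (1, 1)]:
    turn, ga, gb = design_step(turn, ra, rb)
    assert property_output(ga, gb) == 0  # mutual exclusion holds
```

Model checking then asks whether the property output can ever be 1 in any reachable state, rather than along one simulated trace.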

5 Motivation
Safety model checking has many applications.
Verification problems can be large (1M-10M gates); however, it is often possible to complete the proof without looking at the whole instance (~1% may be enough).
Localization abstraction decides which part of the instance to look at.
Our work is motivated by the need to increase the scalability of abstraction beyond what is currently available.
(Figure: a sequential miter partitioned into gates included in the abstraction and gates excluded from it.)

6 Classification of Abstraction Methods
Automatic vs. manual
SAT-based vs. BDD-based vs. other
Proof-based vs. CEX-based vs. hybrid
Flop-level vs. gate-level
The proposed approach is:
Automatic (derived automatically by the tool)
SAT-based (built on top of an efficient BMC engine)
Hybrid (uses both counter-examples and proofs)
Gate-level (uses individual gates as building blocks)

7 What is BMC?
BMC stands for Bounded Model Checking.
BMC checks the property in the initial state and in the following clock cycles (time frames).
In practice, BMC incrementally unfolds the sequential circuit and runs a SAT solver on each time frame of the unrolled combinational circuit.
If a bug is detected, BMC stops; otherwise it continues while resource limits allow.
(Figure: a sequential circuit and its unfolding into frames 0-3, with primary inputs/outputs and flop inputs/outputs at the frame boundaries.)
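As a minimal illustration of the unrolling idea (not the actual engine), here is a BMC sketch in Python on a toy 2-bit counter. Exhaustive input enumeration stands in for the SAT call; a real engine would encode each unrolling as a SAT instance instead.

```python
from itertools import product

# Toy sequential circuit: a 2-bit counter that increments when inp = 1.
# Safety property: the counter never reaches the value 3.
def next_state(state, inp):
    return (state + inp) % 4

def bad(state):
    return state == 3

def bmc(init, max_frames):
    """Check the property in frames 0..max_frames by unrolling.
    Enumerating input sequences stands in for the SAT solver."""
    for k in range(max_frames + 1):
        for inputs in product([0, 1], repeat=k):
            state = init
            for i in inputs:
                state = next_state(state, i)
            if bad(state):
                return ("SAT", k, inputs)   # bug found at frame k
    return ("UNSAT-bounded", max_frames, None)

print(bmc(0, 5))   # finds the violating input sequence at frame 3
```

The shortest counterexample needs three increments, so BMC reports SAT at frame 3; with a smaller bound the instance stays UNSAT, which is exactly the "bounded" in BMC.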

8 Why Does BMC Work Well?
The BMC engine adds the complete "tent" (bounded cone-of-influence) in each frame, which quickly leads to large SAT instances.
Nevertheless, BMC has been successfully applied to designs with millions of nodes for hundreds or thousands of time frames.
The reason is that, in efficient implementations of BMC, constants are propagated and structural hashing is performed on the logic across time-frame boundaries.
(Figure: frames 0-3 of the unfolding, each adding its bounded cone-of-influence.)
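A minimal sketch of these two simplifications (a hypothetical class, not ABC's implementation): AND nodes are folded away when a fanin is constant, and a hash table ensures that structurally identical nodes, including ones arising in different time frames, are created only once.

```python
class Aig:
    """Toy AIG. Literals: 0 = const false, 1 = const true;
    node i has positive literal 2*i and negated literal 2*i + 1."""

    def __init__(self):
        self.num_nodes = 1   # node 0 is the constant node
        self.strash = {}     # (lit0, lit1) -> literal of existing AND

    def new_input(self):
        lit = 2 * self.num_nodes
        self.num_nodes += 1
        return lit

    def AND(self, a, b):
        # Constant propagation and trivial cases: no node is created.
        if a == 0 or b == 0 or a == (b ^ 1):
            return 0                      # x & 0 = 0, x & !x = 0
        if a == 1 or a == b:
            return b                      # 1 & x = x, x & x = x
        if b == 1:
            return a
        key = (min(a, b), max(a, b))
        # Structural hashing: reuse an identical AND if one exists.
        if key not in self.strash:
            lit = 2 * self.num_nodes
            self.num_nodes += 1
            self.strash[key] = lit
        return self.strash[key]

g = Aig()
x, y = g.new_input(), g.new_input()
assert g.AND(x, y) == g.AND(y, x)   # hashed to the same node
assert g.AND(x, 0) == 0             # constant propagated away
```

When the same circuit is unrolled frame after frame, these rules keep the shared and constant parts of the logic from being duplicated, which is why the SAT instances stay manageable.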

9 How Is Abstraction Implemented?
Hybrid abstraction (Een et al., FMCAD'10) combines counter-example-based abstraction and proof-based abstraction in one engine (using one SAT solver).
The hybrid abstraction engine is an extension of the BMC engine:
Counter-examples are used to grow the abstraction.
Proofs are used to prune irrelevant logic.
(Figure: unfolding of the abstracted model, frames 0-3.)
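The shape of this loop can be sketched as follows. This is purely illustrative: `check` stands in for the incremental SAT/BMC engine, and the scripted stub replaces real counterexample analysis and proof extraction.

```python
# Toy sketch of the hybrid loop: spurious counterexamples grow the
# abstraction; UNSAT cores (proofs) prune it back down.
def hybrid_abstraction(check, max_frames):
    abstraction = set()                    # gate IDs currently included
    for frame in range(max_frames):
        while True:
            status, gates = check(abstraction, frame)
            if status == "UNSAT":
                abstraction &= gates       # proof-based: keep only core gates
                break
            abstraction |= gates           # CEX-based: add missing gates
    return abstraction

# Scripted stand-in for the engine: frame 0 needs gates {1, 2} to rule
# out counterexamples, but in frame 1 only gate 1 appears in the core.
def scripted_check(abstraction, frame):
    needed = {0: {1, 2}, 1: {1}}[frame]
    if needed <= abstraction:
        return "UNSAT", needed             # gates used by the proof
    return "SAT", needed - abstraction     # gates to add after a spurious CEX

print(hybrid_abstraction(scripted_check, 2))   # final abstraction: {1}
```

The key property of the real engine is that both directions, growing and pruning, happen inside one incremental SAT solver rather than by restarting from scratch.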

10 Why Is Traditional Abstraction Less Scalable Than BMC?
The key to BMC's scalability is constant propagation and structural hashing.
However, in the abstraction engine these are not allowed, because the complete resolution proof of each UNSAT call is needed to perform proof-based abstraction.
Our contributions:
(1) Bypass the need for the complete proof, resulting in increased scalability.
(2) Compute incremental UNSAT cores, resulting in drastic memory savings.
(Figure: unfolding of the abstracted model, frames 0-3.)

11 How Is This Achieved?
(1) Bypass the need for the complete proof, resulting in increased scalability:
Simplify old gates added in the previous time frames.
Perform proof-logging only in terms of new gates added during refinement in the current time frame.
(2) Compute incremental UNSAT cores, resulting in drastic memory savings:
Use bit-strings to represent the simplified proof recorded for each learned clause.
Perform reduced proof-logging using bit-wise operations.
(Figure: unfolding of the abstracted model, frames 0-3, distinguishing old gates added in previous frames from new gates added during refinement in the current frame.)
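A sketch of the bit-string idea, with invented names (`bitset`, `resolve`, `core_gates` are not ABC's API): each learned clause carries a bitset over the proof IDs of the new gates its derivation depends on, so resolving clauses reduces to a bitwise OR, and the incremental UNSAT core is read off the final bitset.

```python
# Incremental proof-logging with bit-strings (illustrative sketch).
NUM_PROOF_IDS = 200   # e.g. 200 gates added during refinement

def bitset(ids):
    """Pack a set of proof IDs into one integer bit-string."""
    bs = 0
    for i in ids:
        bs |= 1 << i
    return bs

def resolve(*antecedents):
    """A resolvent's proof depends on the union of its antecedents'
    proofs: one bitwise OR replaces storing the resolution chain."""
    out = 0
    for bs in antecedents:
        out |= bs
    return out

def core_gates(final_bs):
    """Unpack the bit-string of the final (empty-clause) proof."""
    return [i for i in range(NUM_PROOF_IDS) if (final_bs >> i) & 1]

c1 = bitset([3, 7])
c2 = bitset([7, 42])
print(core_gates(resolve(c1, c2)))   # gates in the incremental core
```

Each clause thus costs only NUM_PROOF_IDS / 8 bytes of proof information, which is what makes the memory figures on the next slide possible.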

12 Why Is Memory Saved?
Assume complete proof-logging is performed:
25 literals in each learned clause → ~100 bytes for the clause
100 antecedents in each learned clause → ~400 bytes for the proof
1M learned clauses with antecedents → ~500 MB for the complete proof
Assume simplified proof-logging is performed:
200 different proof IDs used → 25 bytes per clause
100K learned clauses in the incremental proof → ~2.5 MB for the incremental proof
Memory reduction: ~200x
(Figure: unfolding of the abstracted model, frames 0-3.)
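The estimates above can be checked with simple arithmetic:

```python
# Reproducing the slide's memory estimates.
full_proof = 1_000_000 * (100 + 400)   # 1M clauses, ~100 B clause + ~400 B proof
incr_proof = 100_000 * (200 // 8)      # 100K clauses, 200 proof IDs -> 25 B bitset

print(full_proof // 10**6, "MB")       # 500 MB for the complete proof
print(incr_proof / 10**6, "MB")        # 2.5 MB for the incremental proof
print(full_proof // incr_proof, "x")   # 200x reduction
```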

13 Components of Abstraction Engine
BMC engine:
AIG package
SAT solver
CNF computation
Technology mapper
UNSAT core computation
Proof logger
Refinement engine:
Counter-example simulation
Circuit analysis (UNSAT core computation)

14 Abstraction Algorithm (GLA)
abstraction deriveAbstraction( sequential miter A, parameters P )
{
    initialize abstraction to contain only the output gate;
    for ( f = 0; f < Limit; f = f + 1 )
    {
        // perform BMC on the current abstraction
        unroll current abstraction in frame f with simplification;
        status = callSAT( unrolled abstraction, resource limits );
        if ( status == SAT )
        {
            // abstraction refinement is needed
            bookmark the current state of the SAT solver;
            while ( status == SAT )
            {
                perform priority-based abstraction refinement;
                add new gates to all frames without simplification;
                status = callSAT( unrolled abstraction, resource limits );
            }
        }
        assert( status == UNSAT );  // BMC is UNSAT in frame f at this point
        if ( abstraction refinement took place )
        {
            compute incremental UNSAT core in terms of new gates added in frame f;
            rollback SAT solver to the previous bookmark;
            add the UNSAT core to all time frames with simplification;
        }
    }
    return current abstraction;
}

15 A Typical Run of GLA
abc 02> &r klm.aig; &ps; &gla -vf -F 90 -R 0
Running gate-level abstraction (GLA) with the following parameters: FrameMax = 90, ConfMax = 0, Timeout = 0, RatioMin = 0%, RatioMax = 30%, LrnDelta = 200, LrnRatio = 70%, Skip = 0, SimpleCNF = 0, Dump = 0
(Per-frame statistics table, with columns Frame, % Abs, PPI, FF, LUT, Confl, Cex, Vars, Clas, Lrns, Time, Mem, omitted.)
SAT solver completed 90 frames and produced a 16-stable abstraction.
abc 02> &ps; &gla_derive; &put; pdr
Property proved.

16 Experimental Setting
Comparing 4 abstraction engines:
ABS (flop-based hybrid abstraction, N. Een et al., FMCAD 2010)
GLA without simplification and with full proof-logging (&gla -np)
GLA without simplification and with incremental proofs (&gla -n)
GLA with simplification and with incremental proofs (&gla)
Using the suite of IBM benchmarks from the 2011 Hardware Model Checking Competition; benchmarks 6s40p1 and 6s40p2 are removed as easily SAT.
Running on one core of an Intel Xeon CPU.
Using a 5-minute timeout for each benchmark.
Learned-clause removal, abstraction-manager restarts, and early termination are disabled.
The command line is: &gla [-n] [-p] -L 0 -P 0 -R 0 -T 300

17 Experimental Results

18 Experimental Results

19 Observations
In five minutes, GLA finds abstractions that are tested 59% (10%) deeper than those found by ABS (GLAn).
GLA produces abstractions that are close to ABS in terms of flops but 36% smaller in terms of AND gates.
GLA uses on average 500x less memory for UNSAT cores than GLAnp, which computes a complete proof.

20 Previous Work
Flop-level abstraction:
N. Een, A. Mishchenko, and N. Amla, "A single-instance incremental SAT formulation of proof- and counterexample-based abstraction", Proc. FMCAD'10.
Gate-level abstraction:
J. Baumgartner and H. Mony, "Maximal Input Reduction of Sequential Netlists via Synergistic Reparameterization and Localization Strategies", Proc. CHARME'05.

21 Conclusions
Presented a revised approach to SAT-based hybrid gate-level abstraction.
The approach differs from previous work in that it is more scalable and uses less memory.
Experimental results show that it is promising.

22 Future Work
Enhancing abstraction refinement by performing more detailed structural analysis
Improving the scalability of refining deep failures of abstraction using partial counter-examples
Applying a similar approach to make interpolation-based model checking more scalable

