
1 On Solving Presburger and Linear Arithmetic with SAT Ofer Strichman Carnegie Mellon University

2 Disjunctive linear arithmetic: a Boolean combination of predicates of the form a1·x1 + … + an·xn < b (or ≤ b), where the coefficients a1, …, an and b are rational constants and the variables x1, …, xn range over the reals.

3 Quantifier-free Presburger formulas: a Boolean combination of predicates of the form a1·x1 + … + an·xn < b (or ≤ b), where the coefficients a1, …, an and b are integer constants and the variables x1, …, xn range over the integers.

4 Some Known Techniques. Linear arithmetic (conjunctions only): interior-point method (Khachiyan 1979, Karmarkar 1984) (P); Simplex (Dantzig 1949); Fourier-Motzkin elimination; loop residue (Shostak 1984); … Almost all theorem provers use Fourier-Motzkin (PVS, ICS, SVC, IMPS, …).

5 Fourier-Motzkin elimination, example. Elimination order: x1, x2, x3. The system: (1) x1 − x2 < 0, (2) x1 − x3 < 0, (3) −x1 + 2x3 + x2 < 0, (4) −x3 < −1. Eliminating x1 yields (5) 2x3 < 0 (from 1 and 3) and (6) x2 + x3 < 0 (from 2 and 3). Eliminating x2: x2 now occurs only with a positive coefficient, so (6) is simply dropped. Eliminating x3 yields (7) 0 < −1 (from 4 and 5). Contradiction: the system is unsatisfiable!
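The elimination step walked through above can be sketched in a few lines of Python. This is not from the slides: the constraint representation (a dict of coefficients plus a strict upper bound) and the function name are my own, but the procedure is standard Fourier-Motzkin on the slide's example.

```python
def fm_eliminate(constraints, var):
    """One Fourier-Motzkin step: eliminate `var` from a list of strict
    inequalities, each given as (coeffs, bound) meaning sum(c*x) < bound."""
    pos, neg, rest = [], [], []
    for coeffs, b in constraints:
        c = coeffs.get(var, 0)
        (pos if c > 0 else neg if c < 0 else rest).append((coeffs, b))
    out = list(rest)
    for pc, pb in pos:
        for nc, nb in neg:
            cp, cn = pc[var], -nc[var]
            # cn*(positive constraint) + cp*(negative constraint) cancels var
            coeffs = {}
            for v in set(pc) | set(nc):
                c = cn * pc.get(v, 0) + cp * nc.get(v, 0)
                if v != var and c != 0:
                    coeffs[v] = c
            out.append((coeffs, cn * pb + cp * nb))
    return out

# The example system (1)-(4):
system = [
    ({'x1': 1, 'x2': -1}, 0),            # (1) x1 - x2 < 0
    ({'x1': 1, 'x3': -1}, 0),            # (2) x1 - x3 < 0
    ({'x1': -1, 'x3': 2, 'x2': 1}, 0),   # (3) -x1 + 2x3 + x2 < 0
    ({'x3': -1}, -1),                    # (4) -x3 < -1
]
for v in ['x1', 'x2', 'x3']:
    system = fm_eliminate(system, v)
# A constraint with no variables and bound b <= 0 reads "0 < b": contradiction.
print(any(not c and b <= 0 for c, b in system))  # → True: unsatisfiable
```

Running it reproduces the slide's derivation: after eliminating all three variables, the only remaining constraint is the variable-free contradiction.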

6 Fourier-Motzkin elimination (1/2). The input: a system of conjoined linear inequalities, with m constraints over n variables.

7 Fourier-Motzkin elimination (2/2). Eliminating xn: let m1 be the number of constraints with ai,n > 0 and m2 the number with ai,n < 0. Each pair of one positive and one negative occurrence is combined to cancel xn; all constraints with ai,n = 0 are kept as-is. The elimination thus replaces m1 + m2 constraints with m1·m2 new ones, a net addition of (m1·m2 − m1 − m2) constraints.

8 Complexity of Fourier-Motzkin. Worst-case complexity: doubly exponential, since the number of constraints can roughly square with each eliminated variable. So why is it so popular in verification? Because the real bottleneck lies elsewhere: case splitting over the Boolean structure of the formula. Q: Is there an alternative to case-splitting?
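The doubly exponential blow-up can be made concrete with a tiny recurrence. This sketch (my own, not from the slides) assumes the worst case at every step: the current m constraints split evenly into positive and negative occurrences of the eliminated variable, and no derived constraint is ever redundant.

```python
def fm_worst_case(m, n):
    """Worst-case constraint count after eliminating n variables:
    an even split gives (m/2)*(m/2) new constraints per elimination."""
    for _ in range(n):
        m = (m // 2) * (m // 2)
    return m

# 16 constraints, 3 eliminations: 16 -> 64 -> 1024 -> 262144.
print(fm_worst_case(16, 3))  # → 262144
```

Three eliminations already take 16 constraints to over a quarter million, illustrating why the growth is of the form m^(2^n).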

9 A Combined SAT/FM method. φ: x1 − x2 < 0 ∧ x1 − x3 < 0 ∧ (−x1 + 2x3 + x2 < 0 ∨ −x3 < −1). Encode each predicate with a Boolean variable: φ': e1 ∧ e2 ∧ (e3 ∨ e4). Repeat: SAT-solve φ' (Boolean). If UNSAT, exit: φ is unsatisfiable. Otherwise, check the consistency of the assignment (arithmetic). If consistent, exit: φ is satisfiable. Otherwise, backtrack and apply learning to φ'. Implemented in CVC, MathSAT, ICSAT, VeriFun.
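The loop on this slide can be sketched as follows. This is a toy stand-in for the CVC/MathSAT-style method, not their implementation: Boolean assignments are enumerated in place of a real SAT solver, and the theory check is hard-coded for the running example (e1 ∧ e3 entails 2x3 < 0, which contradicts e4, i.e. x3 > 1).

```python
import itertools

def lazy_solve(skeleton, atoms, theory_consistent):
    """Lazy SAT + theory loop (sketch): find a Boolean model of the
    propositional skeleton, theory-check it, and 'learn' (block)
    refuted assignments before trying the next one."""
    learned = []
    for bits in itertools.product([False, True], repeat=len(atoms)):
        model = dict(zip(atoms, bits))
        if model in learned:
            continue                  # pruned by a learned block
        if not skeleton(model):
            continue                  # not a model of the skeleton
        if theory_consistent(model):
            return model              # phi is satisfiable
        learned.append(model)         # refuted: block this assignment
    return None                      # phi is unsatisfiable

# Skeleton of the running example: e1 & e2 & (e3 | e4)
skel = lambda m: m['e1'] and m['e2'] and (m['e3'] or m['e4'])

def consistent(m):
    # Hard-coded arithmetic check for this example only:
    # asserting e1, e3 and e4 together is inconsistent.
    return not (m['e1'] and m['e3'] and m['e4'])

model = lazy_solve(skel, ['e1', 'e2', 'e3', 'e4'], consistent)
print(model is not None)  # → True: phi is satisfiable
```

A real implementation would replace the enumeration with an incremental SAT solver and the hard-coded check with a Fourier-Motzkin (or Simplex) consistency test over the asserted predicates.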

10 A combined BDD/FM method: Difference Decision Diagrams (Møller et al., 1999). Each path through the diagram (e.g. through the nodes x1 − x3 < 0, x2 − x3 ≥ 0, x2 − x1 < 0 to the terminals 0 and 1) is checked for consistency with a theory-specific procedure ('path-reduce'). Worst case: an exponential number of such paths. Can be easily adapted to disjunctive linear arithmetic.

11 Boolean Fourier-Motzkin (BFM) (1/2). 1. Normalize the formula: transform to NNF, then eliminate negations by reversing inequality signs. Example: ¬(x1 − x2 < 0) ∧ x1 − x3 < 0 ∧ ¬(−x1 + 2x3 + x2 < 0 ∧ −x3 < −1) becomes (x1 − x2 ≥ 0) ∧ x1 − x3 < 0 ∧ (−x1 + 2x3 + x2 ≥ 0 ∨ 1 ≥ x3).

12 Boolean Fourier-Motzkin (BFM) (2/2). φ: x1 − x2 < 0 ∧ x1 − x3 < 0 ∧ (−x1 + 2x3 + x2 < 0 ∨ −x3 < −1). 2. Encode each predicate with a Boolean variable: φ': e1 ∧ e2 ∧ (e3 ∨ e4). 3. Perform FM on the conjunction of all predicates and add the new constraints to φ': combining x1 − x2 < 0 (e1) with −x1 + 2x3 + x2 < 0 (e3) yields 2x3 < 0, encoded by a fresh variable e5, so the clause e1 ∧ e3 → e5 is added.

13 BFM: example. Predicates: e1: x1 − x2 < 0; e2: x1 − x3 < 0; e3: −x1 + 2x3 + x2 < 0; e4: −x3 < −1. Skeleton: e1 ∧ e2 ∧ (e3 ∨ e4). FM derives e5: 2x3 < 0 and e6: x2 + x3 < 0, and the contradiction 0 < −1 from e4 and e5, yielding the clauses e1 ∧ e3 → e5, e2 ∧ e3 → e6, and e4 ∧ e5 → False. The resulting φ' is satisfiable.
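The finished SAT instance of this example is small enough to check by brute force. The sketch below (my own encoding of the slide's clauses, with the formula written as a Python predicate rather than CNF) confirms that φ' has a model, e.g. e1, e2, e3, e5, e6 true and e4 false:

```python
from itertools import product

# phi' = the skeleton e1 & e2 & (e3 | e4) conjoined with the FM-derived
# clauses: e1&e3 -> e5,  e2&e3 -> e6,  e4&e5 -> False.
def phi_prime(m):
    return (m['e1'] and m['e2'] and (m['e3'] or m['e4'])
            and (not (m['e1'] and m['e3']) or m['e5'])
            and (not (m['e2'] and m['e3']) or m['e6'])
            and not (m['e4'] and m['e5']))

atoms = ['e1', 'e2', 'e3', 'e4', 'e5', 'e6']
models = [m for m in (dict(zip(atoms, bits))
                      for bits in product([False, True], repeat=len(atoms)))
          if phi_prime(m)]
print(bool(models))  # → True: phi' (and hence phi) is satisfiable
```

In practice φ' would of course be handed to a SAT solver such as Chaff rather than enumerated.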

14 Problem: redundant constraints. φ: x1 < x2 − 3 ∨ (x2 < x3 − 1 ∧ x3 < x1 + 1). With case splitting, the case x1 < x2 − 3 on its own generates no constraints; only the case x2 < x3 − 1 ∧ x3 < x1 + 1 produces combinations. BFM, however, runs FM on the conjunction of all predicates, x1 < x2 − 3 ∧ x2 < x3 − 1 ∧ x3 < x1 + 1, and so also combines x1 < x2 − 3 with each of the other two, generating constraints that no case ever needs.

15 Solution: Conjunctions Matrices (1/2). Let φd be the DNF representation of φ. We only need to consider pairs of constraints that appear together in one of the clauses of φd. Deriving φd is exponential, but knowing whether a given set of constraints shares a clause in φd is polynomial, using conjunctions matrices.

16 Conjunctions Matrices (2/2). φ: l0 ∨ (l1 ∧ (l2 ∨ l3)). The DNF clauses of φ are {l0}, {l1, l2} and {l1, l3}, so the conjunctions matrix Mφ has Mφ[l1, l2] = Mφ[l1, l3] = 1, while Mφ[l0, li] = 0 for all i and Mφ[l2, l3] = 0. Consider a pair of literals (li, lj) only if Mφ[li, lj] = 1.
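The matrix can be built in one pass over the formula tree, without ever constructing the DNF: two literals share a DNF clause exactly when some conjunction node has one of them in its left subtree and the other in its right subtree. This sketch is my own rendering of that idea (the tuple-based tree representation and names are assumptions, not the paper's data structure):

```python
# Formula trees: ('lit', name) | ('and', left, right) | ('or', left, right).
def conj_matrix(f, M=None):
    """Return (literals, M) where the key {a, b} is in M iff a and b
    share a clause in the DNF of f. A conjunction node marks every
    cross pair of its subtrees; a disjunction node marks none."""
    if M is None:
        M = {}
    if f[0] == 'lit':
        return [f[1]], M
    left, _ = conj_matrix(f[1], M)
    right, _ = conj_matrix(f[2], M)
    if f[0] == 'and':
        for a in left:
            for b in right:
                M[frozenset((a, b))] = 1
    return left + right, M

# phi: l0 | (l1 & (l2 | l3))
phi = ('or', ('lit', 'l0'),
             ('and', ('lit', 'l1'), ('or', ('lit', 'l2'), ('lit', 'l3'))))
lits, M = conj_matrix(phi)
# Only (l1,l2) and (l1,l3) may ever be conjoined:
print(sorted(sorted(p) for p in M))  # → [['l1', 'l2'], ['l1', 'l3']]
```

The traversal touches each pair at most once per conjunction node, which is why the check is polynomial even though the DNF itself is exponential.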

17 BFM: example, revisited with conjunctions matrices. Predicates as before: e1: x1 − x2 < 0; e2: x1 − x3 < 0; e3: −x1 + 2x3 + x2 < 0; e4: −x3 < −1, with skeleton e1 ∧ e2 ∧ (e3 ∨ e4). In the initial matrix every pair may be conjoined except (e3, e4), which sit on opposite sides of the disjunction. FM adds e5: 2x3 < 0 (from e1, e3) and e6: x2 + x3 < 0 (from e2, e3); a derived predicate inherits the conjoinability of its parents, so M[e4, e5] = 0 and the combination of e4 and e5 is never generated: a constraint saved.

18 Comparing Complexity (1/2). Denote the total number of generated constraints by: bfm (with BFM), split (with case splitting), comb (with the combined SAT/FM method). Claim 2: bfm ≤ split, because of the conjunctions matrices. Claim 3: typically bfm << split, since the same pair of constraints can appear in many DNF clauses.

19 Comparing Complexity (2/2). Claim 4: the practical ratio between bfm and comb varies. Theoretically, comb can generate more constraints than split; even with learning, it may generate the same constraint many times. But due to the pruning power of SAT, comb will typically traverse only a small subset of the possible combinations.

20 Claim 5: complexity of solving the resulting SAT instance (m = number of predicates in φ). All the clauses that we add are Horn clauses, which keeps the SAT instance comparatively easy. The overall complexity is the cost of the reduction plus the cost of solving the SAT instance.

21 Experimental results (1/2): real examples. On some real examples the tools gave inconsistent performance. The conjectured reason: ICS has a more efficient implementation of Fourier-Motzkin than the other tools (e.g. heuristics for choosing the elimination order).

22 Experimental results (2/2): random instances. Measured: reduction time of '2-CNF style' random instances. Solving the resulting instances with Chaff takes a few seconds each, while both ICS and CVC could only solve the 10x10 instance.

23 A projection chain. [Figure: φn over x1, …, xn is projected to φn−1 over x1, …, xn−1, and so on down to φ1 over x1 alone; each elimination step preserves satisfiability: ⊨ φn ⟺ ⊨ φn−1 ⟺ … ⟺ ⊨ φ1.]

24 The Omega Test for Presburger formulas: an adaptation of the Fourier-Motzkin method to integer variables. In each elimination step, the input is ∃xn. Cn and the output is C'n−1 ∨ Sn−1.

25 Boolean Omega Test. 1. Normalize (eliminate all negations). 2. Encode each predicate with a Boolean variable: inequality #1 ∧ inequality #2 ∧ (inequality #3 ∨ inequality #4) becomes e1 ∧ e2 ∧ (e3 ∨ e4). 3. Solve the conjoined list of constraints with the Omega test, adding the new constraints to φ'.

26 The End

27 Experimental results (2/2): real examples. Seven hardware designs with equalities and inequalities: all seven were solved with BFM and CVC in a few seconds; five were solved with ICS in a few seconds, and the other two could not be solved. The conjectured reason: ICS has a more efficient implementation of Fourier-Motzkin than PORTA. On the other hand, on the standard ICS benchmarks (a conjunction of inequalities), some instances could not be solved with BFM, while ICS solves all of them in a few seconds.

28 Quantifier-free Presburger formulas. Some known techniques: Branch and Bound; SUP-INF (Bledsoe 1974); the Omega Test (Pugh 1991); …

29 Quantifier-free Presburger formulas. The classical Fourier-Motzkin method finds real solutions. Geometrically, a system of real inequalities defines a convex polyhedron, and each elimination step projects it onto a lower dimension; that is, it computes the shadow of the polyhedron.

30 The Omega Test, Pugh (1993). The shadow of constraints over integers is not convex: satisfiability of the real shadow does not imply satisfiability in the higher dimension. A partial solution: consider only the areas above which the system is at least one unit 'thick'; this is the dark shadow. If there is an integral point in the dark shadow, there is also an integral point above it.
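For a single pair of constraints c·x ≤ a and b ≤ d·x (with c, d > 0), the standard Omega-test conditions have a compact arithmetic form: the real shadow is b·c ≤ a·d, and the dark shadow is a·d − b·c ≥ (c − 1)(d − 1). The sketch below (my own code; the paper treats the general multi-constraint case) contrasts the two on a case where the real solution is non-integral:

```python
def real_shadow(a, c, b, d):
    """Real shadow of  c*x <= a  and  b <= d*x  (c, d > 0) after
    eliminating x: classical FM gives b*c <= a*d."""
    return b * c <= a * d

def dark_shadow(a, c, b, d):
    """Pugh's sufficient condition: if the interval [b/d, a/c] is at
    least one unit 'thick', it must contain an integer."""
    return a * d - b * c >= (c - 1) * (d - 1)

def has_integer(a, c, b, d):
    # Ground truth: ceil(b/d) <= floor(a/c), in exact integer arithmetic.
    return -((-b) // d) <= a // c

# 3 <= 2x and 2x <= 3: the real shadow holds (x = 1.5 works over the
# reals), but there is no integer solution; the dark shadow rejects it.
print(real_shadow(3, 2, 3, 2), dark_shadow(3, 2, 3, 2), has_integer(3, 2, 3, 2))
# → True False False

# 1 <= 2x and 2x <= 3: the dark shadow accepts, and indeed x = 1 works.
print(dark_shadow(3, 2, 1, 2), has_integer(3, 2, 1, 2))
# → True True
```

The dark shadow is sound but incomplete: when it rejects and the real shadow accepts, the Omega test must fall back on splinters, as the next slide describes.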

31 The Omega test (2/3). If there is no solution to the real shadow, φ is unsatisfiable. If there is an integral solution to the dark shadow, φ is satisfiable. Otherwise ('the omega nightmare'), check a small set of planes ('splinters').

32 Reduction to SAT is not the only way: finite domain instantiation. Disjunctive linear arithmetic and its sub-theories enjoy the 'small model property'. A known sufficient domain for equality logic is 1..n, where n is the number of variables. For this logic it is possible to compute a significantly smaller domain for each variable (Pnueli et al., 1999); the algorithm is a graph-based analysis of the formula structure, and it can potentially be extended to linear arithmetic.

33 Reduction to SAT is not the only way (cont.). Instead of giving all eleven variables the range [1..11] (state space 11^11), analyze their connectivity: x1, y1, x2, y2 ∈ {0..1}; u1, f1, f2, u2 ∈ {0..3}; g1, g2, z ∈ {0..2}, for a state space of about 10^5. Further analysis will result in a state space of 4.

