
Programmed Strategies for Program Verification
Richard B. Kieburtz
OHSU/OGI School of Science and Engineering and Portland State University

Background
The Programatica project
–Objective: to explore methods for producing scientifically certified software
–Property verification provides one basis for certification
P-logic is a verification logic for Haskell
–The logic directly expresses properties of program modules written in Haskell 98
–Assertions are embedded in a Haskell module
Plover is an automatic verifier for P-logic
–Based upon a classical logic
–Implemented with Stratego
–Plover implements a collection of strategies that generate and discharge verification conditions

Propositions, Sequents and Rules
Propositional forms:
–Quant([Qvars], Prop)
–Equal(HTerm, HTerm)
–Has(HTerm, Pred), the syntax for unary predicate application
–Pred([HTerm]), the syntax for n-ary predicate application
–Conj([Prop]), Disj([Prop]), Neg(Prop), Implies([Prop], Prop)
–True, False
Sequent form:
–Consequence(Type-env, [Prop], [Prop])
  A list of assumptions (implicit conjunction) implies a list of conclusions (implicit disjunction) in a given type environment
Rule of consequence:
–A finite list of antecedent sequents, [Sequent], written above the line, entails a consequent Sequent, written below it
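For concreteness, the sequent representation can be modeled as a small algebraic datatype. The following Python sketch uses an invented encoding (Plover's actual Stratego terms differ), and shows only a few of the propositional forms:

```python
from dataclasses import dataclass

# A few of the propositional forms, using the slide's constructor names.
# HTerms and Preds are left as plain strings for brevity.
@dataclass(frozen=True)
class Equal:          # Equal(HTerm, HTerm)
    lhs: str
    rhs: str

@dataclass(frozen=True)
class Has:            # Has(HTerm, Pred): unary predicate application
    term: str
    pred: str

@dataclass(frozen=True)
class Consequence:    # sequent: type env, assumptions |- conclusions
    env: tuple        # ((variable, type), ...)
    assumptions: tuple
    conclusions: tuple

goal = Consequence(env=(("x", "Int"),),
                   assumptions=(Equal("x", "y"),),
                   conclusions=(Has("x", "Univ"),))
```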

From Assertions to Sequents
An assertion of a property in a Haskell program module typically has the form of a quantified implication:
  All x1 :: t1, …, xn :: tn. Prop1 /\ … /\ Propk ==> Prop0
The assertion is rewritten as a logically equivalent sequent in which
–Variables that are universally quantified in a prefix are bound in a type environment
–Implicands are listed as assumptions in the sequent
–The implicant becomes the conclusion of the sequent
  x1 :: t1, …, xn :: tn; Prop1, …, Propk |– Prop0
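The three-part rewrite is mechanical. A minimal Python sketch, using an invented tuple encoding of assertions purely for illustration:

```python
def assertion_to_sequent(assertion):
    """Rewrite a quantified implication
         All x1 :: t1, ..., xn :: tn. Prop1, ..., Propk ==> Prop0
    into the equivalent sequent: the quantifier prefix becomes a type
    environment, the implicands become assumptions, and the implicant
    becomes the conclusion."""
    qvars, implicands, implicant = assertion
    type_env = dict(qvars)
    return (type_env, list(implicands), implicant)

sequent = assertion_to_sequent(
    ([("x", "Int")], ["Even x"], "Even (x + 2)"))
```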

Rules of the Game
A rule of consequence relates a finite set of antecedents to a consequent, displayed as [antecedents] above the line with the consequent below it
–The antecedents in a rule constitute verification conditions for entailment of the consequent
A rule may contain meta-variables that range over object terms
–A rule with an empty list of antecedents is an axiom
Proof derivation by rewriting:
–Starting from a goal sequent, apply rules of consequence repeatedly to replace the goal by simpler subgoal sequents
–A proof is a tree with the goal at its root and every leaf an axiom
  The branches of a proof tree are instances of rules of consequence
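Proof derivation by rewriting amounts to backward search: apply a rule, recurse on the antecedents it generates, and succeed when every leaf is an axiom. A minimal Python sketch, with a toy evenness theory standing in for the rules of P-logic:

```python
def prove(goal, rules):
    """Backward proof search. Each rule maps a goal to a list of
    antecedent subgoals (an empty list, for an axiom), or None if it
    does not apply. Returns a proof tree (goal, subtrees), or None."""
    for rule in rules:
        subgoals = rule(goal)
        if subgoals is None:
            continue
        subtrees = [prove(g, rules) for g in subgoals]
        if all(t is not None for t in subtrees):
            return (goal, subtrees)
    return None

# Toy theory: even(0) is an axiom; even(n) follows from even(n - 2).
def axiom(n):
    return [] if n == 0 else None

def step(n):
    return [n - 2] if n >= 2 else None

tree = prove(6, [axiom, step])   # a proof tree for even(6)
```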

Soundness of rules for a logic, L
Fundamental notion of soundness: there exists a (non-trivial) model, M_L, such that:
–A sequent in L is true of M_L iff its conclusion is implied by its assumptions when interpreted in M_L
–A rule of consequence of L is valid for M_L iff whenever all of its antecedents are true of M_L, its consequent is true of M_L
–A proof system for L is sound iff it comprises rules of consequence all of which are valid for some model, M_L
Techniques to establish soundness of a proof system will not be addressed in this talk

Proof Construction by Term Rewriting
Sequents, Types, Props, Preds and HTerms are all represented as terms of a multi-sorted abstract syntax
–Application of a rule of consequence is enabled by pattern matching
  Matching binds the meta-variables of the rule
  A consequent is rewritten to zero or more antecedents (verification conditions)
  Pattern matching may inspect any or all of the structure of a Sequent, Prop, Pred, Type or HTerm
–As a rewriting system, the rules of P-logic are not confluent
  Rules may overlap: more than one rule may apply to a term
  The order in which rules are applied can affect whether an attempted proof construction ultimately succeeds
–The choice of which rules to apply may be encoded in the rules themselves (the conditional rewriting approach): conditions imposed on instantiated variables augment pattern matching to trigger the firing of a rule
–Or the choice of rules may be explicitly programmed (the strategy-driven rewriting approach)

Strategies + Decision Procedures = Verification Engine
Rules of P-logic are implemented as rewriting strategies
–From an asserted consequent, generate hypotheses sufficient for its proof
Some strategies used in simplifying formulas:
–Eliminate negated assertions in sequents
–Rename bound variables
–Symbolic reduction of expressions
–Split variables to enable reduction
–If-then-else splitting
–Propagate assumed equalities
Decision procedures for some decidable sub-theories:
–Variable abstraction + congruence closure propagates equalities
–A transitive, partial order models inequalities
–Linear inequalities over Nats with (+) and (<) (not implemented in Plover)
–Linear arithmetic over Rationals with (+) and (<) (not implemented)

Programmed Strategies

Strategies: Rewriting + Control
Elements of a strategy programming language (Stratego):
–Patterns are term-matching strategies
  A successful match binds variables
–Term-builder strategies construct terms
  Using variable bindings in (lexical) scope
–A rewrite rule combines a pattern and a term-builder
  In a common, local scope
–Sequential composition of strategies
  All component strategies succeed or the composition fails
–Alternatives to failure
  Nondeterministic choice, s + s', for strategies with non-overlapping patterns or mutually exclusive conditions
  Left-biased choice, s <+ s', for strategies with possibly overlapping patterns
–Strategy abstraction
  Strategies may be bound as parameters to other strategies
–Fixed-point strategies are defined by simple recursion schemes
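These control elements map directly onto higher-order functions. A Python sketch, treating strategies as functions on terms that raise an exception on failure (the combinator names are ours, not Stratego's):

```python
class Fail(Exception):
    """Raised when a strategy does not apply."""

def seq(*strats):
    """Sequential composition: every component succeeds, or the whole fails."""
    def s(t):
        for st in strats:
            t = st(t)
        return t
    return s

def lchoice(s1, s2):
    """Left-biased choice (s1 <+ s2): try s1, fall back to s2 on failure."""
    def s(t):
        try:
            return s1(t)
        except Fail:
            return s2(t)
    return s

def try_(s1):
    """try(s) = s <+ id: never fails."""
    return lchoice(s1, lambda t: t)

def rec(body):
    """Fixed-point strategy: rec r(body(r))."""
    def s(t):
        return body(s)(t)
    return s

# Example: repeatedly halve a number while it stays even.
def halve(t):
    if t > 0 and t % 2 == 0:
        return t // 2
    raise Fail()

repeat_halve = rec(lambda r: try_(seq(halve, r)))
```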

Example: A Strategy for Sequents: Eliminate Negated Propositions
Classical sequent calculus has the following bi-directional rules:
  Γ |– P, Δ iff Γ, ¬P |– Δ        Γ, Q |– Δ iff Γ |– ¬Q, Δ
A strategy for eliminating negated propositions from a sequent:
  Eliminate-the-negatives(env) :
    Consequence(assumptions, conclusions) ->
    Consequence(<conc> (neg_conclusions, pos_assumptions),
                <conc> (neg_assumptions, pos_conclusions))
    where <filter(not(?Neg(_)))> assumptions => pos_assumptions;
          <filter(\ Neg(p) -> p \ )> assumptions => neg_assumptions;
          <filter(not(?Neg(_)))> conclusions => pos_conclusions;
          <filter(\ Neg(p) -> p \ ; substVar(env))> conclusions => neg_conclusions

A Meta-Strategy for Terms: Rewrite to Normal Forms
Multiplicity of normal forms
–Example: the lambda calculus has weak head normal forms, strong head normal forms and strong normal forms
–Recognizers characterize terms of a specific normal form
–Transformations are rewrites of terms in an anticipated form into a desired normal form
–Reduction rules transform free algebraic terms into a specific normal form
Transformation rules are designed to preserve a specified interpretation of terms

Example: Lambda Calculus
We'll use Stratego notation
–First, declare a signature of constructors; this specifies a free term algebra
  constructors
    Var : String -> Exp
    Abs : String * Exp -> Exp
    App : Exp * Exp -> Exp
–Next, characterize several beta-normal forms by giving recognition strategies
  strategies
    whnf = Abs(id,id) + rec r(Var(id) + App(r,id))
    shnf = rec s(Abs(id,id) + rec r(Var(id) + App(r,s)))
    snf  = rec s(Abs(id,s) + rec r(Var(id) + App(r,s)))
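Read operationally, whnf accepts an abstraction or a variable at the head of an application spine, while snf additionally demands that arguments and abstraction bodies be normal. A Python sketch over tuple-encoded terms, ("Var", x), ("Abs", x, e) and ("App", f, a), mirroring the recognizers rather than transcribing them:

```python
def whnf(t):
    """Weak head normal form: an abstraction, or a variable at the head
    of a (possibly empty) spine of applications."""
    if t[0] == "Abs":
        return True
    while t[0] == "App":
        t = t[1]
    return t[0] == "Var"

def snf(t):
    """Strong normal form: no redex anywhere, including under lambdas."""
    if t[0] == "Abs":
        return snf(t[2])
    def neutral(u):   # a variable applied to strongly normal arguments
        if u[0] == "Var":
            return True
        return u[0] == "App" and neutral(u[1]) and snf(u[2])
    return neutral(t)

identity = ("Abs", "x", ("Var", "x"))
redex = ("App", identity, ("Var", "y"))
```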

Lambda Calculus (Cont'd)
Reduction rules are encoded as strategies
–Add a new constructor for explicit substitution
  constructors
    Let : [String * Exp] * Exp -> Exp
–Beta-reduction with explicit substitution
  strategies
    Beta : App(Abs(x,m),n) -> Let([(x,n)],m)
    LetElim = rec r(
        \ Let([],n) -> n \
      + \ Let([elmt | bindings],n) -> Let(bindings,n')
          where <Replace> (elmt,n) => n' \ )
    Replace = rec r(
        \ ((x,m),Var(x)) -> m \
      + \ (bnd_pr,App(m,n)) -> App(<r> (bnd_pr,m), <r> (bnd_pr,n)) \
      + \ ((x,m),Abs(y,n)) -> Abs(y, <r> ((x,m),n)) where <not(eq)> (x,y) \ )

Lambda Calculus (Cont'd)
Three strategies for normalization:
  strategies
    BetaSubst = Beta; LetElim
    lazy-eval   = rec r( whnf <+ App(r,id); try(BetaSubst; r))
    eager-eval  = rec r( shnf <+ App(r,r); try(BetaSubst; r))
    strong-eval = rec r( snf <+ Abs(id,r) + App(r,r); try(BetaSubst; r))
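The reduction machinery can be modeled compactly. This Python sketch substitutes eagerly instead of building Let nodes, and its normalize plays the role of strong-eval; substitution is naive, so it assumes terms without variable shadowing:

```python
def subst(x, m, t):
    """t[m/x]; naive (no capture avoidance), assumes no shadowing."""
    tag = t[0]
    if tag == "Var":
        return m if t[1] == x else t
    if tag == "App":
        return ("App", subst(x, m, t[1]), subst(x, m, t[2]))
    # tag == "Abs"
    return t if t[1] == x else ("Abs", t[1], subst(x, m, t[2]))

def step(t):
    """One leftmost-outermost beta step, or None if t is normal."""
    if t[0] == "App" and t[1][0] == "Abs":
        _, (_, x, body), n = t
        return subst(x, n, body)
    if t[0] == "Abs":
        b = step(t[2])
        return None if b is None else ("Abs", t[1], b)
    if t[0] == "App":
        f = step(t[1])
        if f is not None:
            return ("App", f, t[2])
        a = step(t[2])
        return None if a is None else ("App", t[1], a)
    return None

def normalize(t):
    """Repeat step until no redex remains (strong normal form)."""
    while True:
        r = step(t)
        if r is None:
            return t
        t = r

k = ("Abs", "x", ("Abs", "y", ("Var", "x")))
term = ("App", ("App", k, ("Var", "a")), ("Var", "b"))
```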

Type-specific strategies

Notational Conventions
I'll give many of the examples in this talk using Programatica notation, of which you should know:
–A proposition formed by application of a unary predicate, P, to an object-language expression, e, can be written as e ::: P when e is a meta-variable, or as {e} ::: P when e is a Haskell expression
–An equality proposition is written e1 === e2, where again the expressions may be enclosed in curly brackets to designate Haskell syntax
–In P-logic, a (unary) predicate expression prefaced with the modal operator ($) denotes a predicate that is not satisfied by { undefined } (the bottom element in a Haskell type frame)
–Univ is a universal predicate, satisfied by every expression of each type
–UnDef is a predicate satisfied only by an undefined expression of each type

Strategies for Terms of a Particular Form: If-then-else Expressions
Rules for if-then-else elimination in the conclusion of a sequent
–Γ is a list of assumptions (assumed propositions)

Strategies for if-then-else elimination
Symbolic evaluation
–Succeeds if, after substitution of equalities, the Boolean predicate of an if-then-else reduces to a constant, True or False
Boolean reduction entailed by assumptions
–This strategy succeeds if the assumptions entail a specific valuation of the Boolean predicate
An asserted conclusion may be entailed independently of the valuation of the Boolean predicate
–This succeeds if the property asserted of an if-then-else is entailed for either valuation of its Boolean predicate
–And the Boolean expression can be proved to have a value!

IteElim
IteElim attempts to resolve an if-then-else expression, relative to a list of assumptions
–It uses a discharge strategy, given as a parameter, to attempt discharge of generated proof obligations
  IteElimThen :
    (HIte(b,e1,e2),assumptions) ->
    (e1,[Consequence(assumptions,[Has(b,Strong(Univ))]),
         Consequence(assumptions,[<Bool-to-prop> b])])
  IteElimElse :
    (HIte(b,e1,e2),assumptions) ->
    (e2,[Consequence(assumptions,[Has(b,Strong(Univ))]),
         Consequence(assumptions,[<Bool-to-prop> HApp(HVar("not"),b)])])
  IteElim(discharge) =
    {?ite;
     IteElimThen;
     \ (t,[strengthAssertion,valueAssertion]) -> t
       where test(<discharge> strengthAssertion);
             ( <discharge> valueAssertion; !t
               <+ !ite; IteElimElse;
                  \ (t_,[_,negValueAssertion]) -> t_
                    where <discharge> negValueAssertion \ ) => t \ }

The Bool-to-prop Translation
To show that an expression b, of type Bool, resolves to True in the current context, transform it to a logical proposition, <Bool-to-prop> b
–Requires verification of a side condition, b ::: $Univ, for soundness
–Bool-to-prop analyzes the structure of a term, generating propositions from its Bool-typed parts
  Bool-to-prop = rec r (
      \ HVar(x) -> Equal(HVar(x),HCon("True",[])) \
    + \ HApp(HVar("not"),HVar(x)) -> Equal(HVar(x),HCon("False",[])) \
    + \ HApp(HVar("not"),HApp(HVar("not"),b)) -> <r> b \
    + \ HApp(HVar("not"),HApp(HApp(HVar(op),x),y)) ->
        <r> HApp(HApp(HVar(<negate> op),x),y) \
    + \ HApp(HApp(HVar("||"),b1),b2) -> Disj([<r> b1, <r> b2]) \
    + \ HApp(HApp(HVar("&&"),b1),b2) -> Conj([<r> b1, <r> b2]) \
    + \ HApp(HApp(HVar("=="),x),y) -> Equal(x,y) \
    + \ HApp(HApp(HVar("/="),x),y) -> Neg(Equal(x,y)) \
    + \ HApp(HApp(HVar("<"),x),y) -> Has(y,LiftedSec(HApp(HVar("<"),x))) \
    + … )
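The translation scheme reads naturally as a recursive function. A Python sketch over a simplified Bool-expression encoding (our own, much flatter than Plover's HTerm syntax; comparison cases beyond equality and inequality are omitted):

```python
def bool_to_prop(t):
    """Translate a Bool-typed term into a proposition, in the spirit of
    Bool-to-prop. Terms are nested tuples: ("var", x), ("not", b),
    ("and", a, b), ("or", a, b), ("eq", x, y), ("ne", x, y)."""
    tag = t[0]
    if tag == "var":
        return ("Equal", t, ("con", "True"))
    if tag == "not":
        inner = t[1]
        if inner[0] == "var":
            return ("Equal", inner, ("con", "False"))
        if inner[0] == "not":
            return bool_to_prop(inner[1])    # double negation
        return ("Neg", bool_to_prop(inner))
    if tag == "or":
        return ("Disj", [bool_to_prop(t[1]), bool_to_prop(t[2])])
    if tag == "and":
        return ("Conj", [bool_to_prop(t[1]), bool_to_prop(t[2])])
    if tag == "eq":
        return ("Equal", t[1], t[2])
    if tag == "ne":
        return ("Neg", ("Equal", t[1], t[2]))
    raise ValueError(f"not a Bool term: {t!r}")
```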

IteSplit
IteElim is a powerful strategy
–But it depends upon having assumptions strong enough to resolve the Boolean predicate of an if-then-else expression
When IteElim fails, try IteSplit (illustrated here for an if-then-else in an equality proposition)
–First, assume the Boolean predicate of the if-then-else to be True, and attempt to prove that the property asserted of the if-then-else holds for its then expression
–Next, assume the Boolean predicate to be False, and attempt to prove that the asserted property holds of the else expression
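The contrast between the two strategies can be sketched as follows: IteElim resolves the condition from the assumptions and keeps one branch, while IteSplit gives up on resolving it and emits one proof obligation per branch. The encoding here is illustrative, not Plover's:

```python
def ite_elim(cond, then_e, else_e, assumptions):
    """If the assumptions decide the Boolean condition, select the
    corresponding branch; otherwise return None, at which point a
    verifier would fall back to ite_split. 'assumptions' maps Boolean
    expressions to the truth value they are known to have."""
    if cond in assumptions:
        return then_e if assumptions[cond] else else_e
    return None

def ite_split(cond, then_e, else_e, goal):
    """Emit two verification conditions: prove the goal of the then
    branch under cond = True, and of the else branch under cond = False."""
    return [({cond: True}, then_e, goal),
            ({cond: False}, else_e, goal)]

branch = ite_elim("x < 0", "negate x", "x", {"x < 0": False})
obligations = ite_split("x < 0", "negate x", "x", "result >= 0")
```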

Decision Procedures

Programming Languages Embed Algebraic Types
Datatypes define (almost) free algebras
–Bool, with operators (&&), (||), (not)
  Has a decidable, non-empty theory (cf. the Haskell language definition)
–Arithmetic algebras (Integer, Float)
  Equivalence of expressions is undecidable, but there are decidable subtheories
–Instances of Monad, and of other classes
  Classes such as Eq and Ord enrich the algebras of instance types with operators like (==), (>), (<=)
  The theory associated with the class enriches the theories of its derived instances

Decision Procedures
In many simple theories, equivalence of expressions is decidable. Some examples:
–Simply-typed lambda calculus (without -rules)
–Presburger arithmetic
–Linear inequalities over Nats with (+) and (<)
–Linear arithmetic over Rationals with (+) and (<)
An algorithmic decision procedure can calculate truth or falsity of a proposition in a decidable theory
–A decision procedure operates on a model for the theory
  This can be a term model, but a concrete model may yield a more efficient algorithm

Cooperating Decision Procedures
The union of two decidable theories may or may not be decidable
–If their union is decidable, the theories have a common model
  It's not always easy to discover a common model
–Decision procedures for two theories in a common model are said to cooperate
  They can interact with one another on objects of the model
Cooperating decision procedures can multiply deductive power
–Through mutual interaction, to simplify complex deductive problems
–Through interaction with rewrite strategies on a term model
  This requires an injective map from terms to the decision-procedure model

Embedding a Decision Procedure for Equality
A list of assumptions often contains a number of term equalities
–Q: How can a finite set of equalities assumed in a sequent be propagated to all relevant subterms in its conclusion?
–A: By a congruence closure algorithm (in four steps)
Step 1: Variable abstraction
–Construct an environment: map every term that is an argument of an equality or of a function symbol in an assumption to a unique, fresh identifier
  The environment mapping is injective
Step 2: Rewrite the assumptions, replacing terms by variables, using the environment constructed at step 1
–Every equation now has the form HVar(i) = HVar(j)

Propagating Equalities (Cont'd)
Step 3: Orient the equations and calculate their symmetric, transitive closure
–This maps each variable to a unique representative of its equivalence class (union-find algorithm)
–Orient the equations so that the inverse mapping of each class representative is a term in normal form (when possible), so that reductions are not reverted under inversion of the environment map
Step 4: Rewrite the conclusion of the sequent
–Each term (or subterm) in the range of the environment is replaced by the unique variable representing its equivalence class
–The conclusion then manifests all equalities assumed in the sequent
The set of equalities to be propagated may be enriched whenever the set of assumptions is enriched
–Equality propagation cooperates with other strategies
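Steps 3 and 4 can be sketched with a standard union-find; the environment construction of steps 1 and 2 is elided, so this operates directly on variable names. A simplified model, not Plover's implementation:

```python
class UnionFind:
    """Union-find over variable names, computing the symmetric,
    transitive closure of a set of equations (step 3)."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

def propagate(equalities, conclusion_terms):
    """Step 4: replace each term in the conclusion by the representative
    of its equivalence class."""
    uf = UnionFind()
    for a, b in equalities:
        uf.union(a, b)
    return [uf.find(t) for t in conclusion_terms]
```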

Propagating Equalities: An Example
Goal sequent:
  {a} === {(u,v)}, {b} === {a} |– { (\(x,y) -> x) b } === {u}
Step 1: The environment map is constructed: [((u,v),a)] (an association list)
Step 2: Assumed equalities are extracted from the assumptions: {a} === {a}, {b} === {a}
Step 3: Equivalence classes are calculated: [a,b], with representative a
Step 4: Variables are replaced by equivalence-class representatives:
  [((u,v),a)], {a} === {a}, {a} === {a} |– { (\(x,y) -> x) a } === {u}

Example Continued: Enabling Reduction
Inverting the environment map reveals a normal-form term
–The map-inversion strategy is triggered by a term that applies a patterned abstraction to a variable
  [((u,v),a)], {a} === {a}, {a} === {a} |– { (\(x,y) -> x) (u,v) } === {u}
The argument with 2-tuple structure matches the pattern, enabling a reduction rule
  [((u,v),a)], {a} === {a}, {a} === {a} |– { u } === {u}
A reflexive-equality test strategy recognizes the sequent as valid

Persistent Representation of Equivalence Classes
Uses the Stratego tables library:
  strategies
    init-eq-class = "eq-class"; "eq-unions"; ("eq-unions",0,[])
    eq-rep = rec r({?x; ( ("eq-class",x); r + !x)})
    eq-union = {?(x,y); <eq-rep> x => x'; <eq-rep> y => y';
                (!(x',y'); eq
                 ("eq-class", x', y');
                 ("eq-unions",0, Cons((x',y'), ("eq-unions",0))))}
A stack of tables supports nested scopes of eq-classes:
    enter-scope = ("eq-unions",0,[])
    leave-scope = ("eq-unions",0); map({?(x,_); ("eq-class",x)})
    try-in-scope(s) = {?x; (enter-scope; <s> x => y; leave-scope; !y
                            + leave-scope; fail)}
Decision procedures build a model for efficient evaluation

Harder Equality Problems
Intensional equality of abstractions
–Is only partially decidable when it includes arithmetic
A partial strategy: eliminate the abstractions
  assumptions |– { \x -> e1 } === { \y -> e2 }
–Using an extensionality rule, generate the new sequent
  assumptions, {z} ::: $Univ |– { (\x -> e1) z } === { (\y -> e2) z }
  where z is a fresh object variable
–Reduce the applications on each side of the equality
  This strategy subsumes eta-reduction and alpha-conversion
–Attempt to prove equality of the reduced expressions

Generic Strategies

Induction Rules for Fixed-point Predicates
Fixed-point (Scott) induction
–Given a (simply recursive) Haskell definition m = tm, where m is a simple variable pattern
–A rule to prove an assertion m ::: P:
  From  Γ |– { tm[undefined/m] } ::: P  and  Γ, {m} ::: P |– { tm } ::: P
  infer  Γ, {m} === { tm } |– {m} ::: P
  where m has no free occurrence in Γ

Structural Induction Schemas
The fixed-point induction rule can be generalized to definitions with patterns that bind multiple variables
–These patterns arise from the constructors of declared data types
–Example: list induction
  The list data type constructor is implicitly defined in Haskell:
    data [a] = [] | a : [a]
  A list induction rule: from Γ |– { [] } ::: P and Γ, {xs} ::: P |– { x : xs } ::: P, conclude Γ |– {l} ::: P
  List induction is sound for all lists finitely constructed with [] and (:)
  But it infers no result for infinite or incomplete lists!
Must we program a specific strategy for each inductive data type declaration?
–Or can we design a generic strategy schema?

A Schema for Structural Induction
Structural induction is a free theorem consequent to a (simply recursive) datatype definition (a free term algebra)
–Soundness is justified by parametricity, a meta-theorem
  data T a1 … an = C1 τ1,1 … τ1,k1 | … | Cm τm,1 … τm,km
  where each τi,j ∈ { a1, …, an, T a1 … an }
–A table-building strategy can interpret a data definition, entering into a symbol table the arity, types and strictness of the constructors
–This leads to a derived induction rule for T a1 … an, with one antecedent per constructor Ci:
  From  Γ, xi,1 :: τi,1, …, xi,ki :: τi,ki, { {xi,j} ::: P where τi,j = T a1 … an } |– {Ci xi,1 … xi,ki} ::: P  (for each i)
  infer  Γ |– All z :: T a1 … an. z ::: $Univ ==> z ::: P
–The rule can be programmed as a generic strategy schema, but this represents a good deal of code and I won't attempt to show it here
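The table-driven derivation of an induction rule can be sketched by generating one proof obligation per constructor, with an induction hypothesis for each recursive argument position. The representation and names below are invented for illustration:

```python
def induction_obligations(constructors):
    """Given a datatype as {constructor: [argument kinds]}, where an
    argument kind is "rec" for a recursive occurrence of the datatype
    and anything else for a type parameter, produce one obligation per
    constructor: (induction hypotheses, goal), both as strings."""
    obligations = []
    for name, arg_kinds in constructors.items():
        hyps = [f"P(x{i})" for i, k in enumerate(arg_kinds) if k == "rec"]
        args = ", ".join(f"x{i}" for i in range(len(arg_kinds)))
        goal = f"P({name}({args}))" if arg_kinds else f"P({name})"
        obligations.append((hyps, goal))
    return obligations

# The list type: data [a] = [] | a : [a]
obs = induction_obligations({"Nil": [], "Cons": ["a", "rec"]})
```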

Induction rules for fixed-point predicates
To prove an Lfp property, the following rule is sound:
  From  Γ, {m} ::: X \/ {m} ::: UnDef |– { tm } ::: P
  infer  Γ, {m} === { tm } |– {m} ::: Lfp X. P
  where X and m have no free occurrence in Γ
To prove a Gfp property, the following rule is sound:
  From  Γ, {m} ::: X |– { tm } ::: $P
  infer  Γ, {m} === { tm } |– {m} ::: Gfp X. P
  where X and m have no free occurrence in Γ

Instances of Rule Schemas
Ordinary rules bind free meta-variables
–These variables are implicitly universally quantified
–Binding occurs in pattern-matching
Fixed rule schemas (lemmas) have the form of universally quantified implications
–When a fixed rule schema appears in the assumptions of a goal sequent:
  Match the implicant of a schema to a conclusion of the goal sequent, binding quantified variables
    Implicands become conclusions of new verification conditions
  Match the implicands of a schema to propositions assumed in the goal sequent
    The implicant of the schema becomes an implied assumption
Generic schemas (structural induction, for example) can be elastic in form
–Match types to fix the structure of the schema
–Then match to a goal sequent as with any fixed rule schema

Wrapping Up

The Complexity Question
There are two aspects of proof search that multiply complexity
–Use of strategies that have a large number of alternatives
  Not as great a problem as it might seem: most unsuccessful paths fail quickly, in pattern-matching
–Generating multiple alternative verification conditions
  Disjunctive assumptions or conjunctive conclusions
  Avoid implications in the statement of individual assumptions
A few strategies, such as IteSplit, generate potentially costly alternatives, which may have a low probability of success
–Reserve use of such strategies as a last resort
–Avoid splitting on case expressions with more than two branches
Compound Boolean expressions, with (&&) and (||), can be costly to resolve
–They are difficult to prune
–Nested if-then-else expressions can limit the rate of complexity growth

Conclusions
Reasons to build a custom verifier
–Afford users a verification logic for a specific programming language
  Mitigates the need for a deep embedding to interpret object-language semantics
–Automate verification of simple program modules
  Automatic verification of mundane properties is quite useful
Reasons not to do so
–Soundness of the verifier is not easily established
–The implementation is large and complex
But if you decide to do it …
–Strategy programming is powerful, yet reasonably transparent
–Cooperating decision procedures can be powerful and efficient
–Focus on getting the basics right before tackling recursion and higher-order constructs

End

