Hoare Logic LN chapter 5, 6 but without 6.8, 6.12, 6.13

Presentation transcript:

Hoare Logic LN chapter 5, 6 but without 6.8, 6.12, 6.13 (to be discussed later). Hoare Logic is used to reason about the correctness of programs. In the end, it reduces a program and its specification to a set of verification conditions.

Overview
Part I : Hoare triple; rules for basic statements (SEQ, IF, ASG); weakest pre-condition; example.
Part II : loops; logic for unstructured programs.

Part I : reasoning about basic statements

Hoare triple
A simple way to specify the relation between a program's initial and end states.
{* y≠0 *} program(x,y) {* return = x/y *}
{* P *} statement {* Q *}
P : pre-condition; Q : post-condition.
Total correctness : if the program is executed on a state satisfying P, it will terminate in a state satisfying Q.
Partial correctness : if the program is executed on a state satisfying P, and if it terminates, it will terminate in a state satisfying Q.
There are other kinds of correctness-related properties which are difficult to express in Hoare triples.
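To make the reading of a triple concrete, here is a small Python sketch (not part of the lecture notes; all names are illustrative) that checks a triple on a handful of sample initial states. This is testing rather than verification: it can only refute a triple, never prove it.

def holds(pre, prog, post, samples):
    # check {* pre *} prog {* post *} on the given sample initial states
    for s in samples:
        if pre(s):                      # only initial states satisfying P matter
            t = prog(dict(s))           # run prog on a copy of the state
            if not post(t):
                return False, s         # s witnesses a violated triple
    return True, None

# {* y ≠ 0 *}  program(x,y)  {* return = x/y *}, with states modelled as dicts
prog = lambda s: {**s, 'return': s['x'] // s['y']}
pre  = lambda s: s['y'] != 0
post = lambda t: t['return'] == t['x'] // t['y']
print(holds(pre, prog, post, [{'x': 6, 'y': 2}, {'x': 7, 'y': 0}]))  # (True, None)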

Difficult to capture in classical Hoare triples:
When specifying the order of certain actions within a program is important: e.g. CSP.
When sequences of observable states throughout the execution have to satisfy a certain property: e.g. temporal logic.
When the environment cannot be fully trusted: e.g. logic of belief.

Rule for SEQ composition
{* P *} S1 {* Q *} , {* Q *} S2 {* R *}
-----------------------------------------------------------------
{* P *} S1 ; S2 {* R *}
Proof sketch: suppose P s; we prove (S1;S2) s ⊆ R.
Take any u ∈ (S1;S2) s and show R u:
u ∈ ∪ { S2 t | t ∈ S1 s }
so (∃ t : t ∈ S1 s : u ∈ S2 t)
[take such a t] t ∈ S1 s /\ u ∈ S2 t
hence Q t, and hence R u. Done.

Rule for IF
{* P /\ g *} S1 {* Q *} , {* P /\ ¬g *} S2 {* Q *}
--------------------------------------------------------------------------
{* P *} if g then S1 else S2 {* Q *}

Rule for Assignment
??
--------------------------------
{* P *} x:=e {* Q *}
Find a pre-condition W that holds before the assignment if and only if Q holds after the assignment. Then we can equivalently prove P ⇒ W.
Semantics of the assignment : (x := e) s = { \v. if v=x then e s else s v }
Semantics of the substitution : Q[e/x] = { s | (\v. if v=x then e s else s v) ∈ Q }
So: Q holds in every final state of (x := e) s
≡ Q (\v. if v=x then e s else s v)
≡ s ∈ Q[e/x]

Assignment, examples
{* 10 = y *} x:=10 {* x=y *}
{* x+a = y *} x:=x+a {* x=y *}
So, W can be obtained as Q[e/x].

Assignment
Theorem: Q holds after x:=e iff Q[e/x] holds before the assignment.
The "proof rule" : {* P *} x:=e {* Q *}  =  P ⇒ Q[e/x]
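As a sanity check, the reduction to P ⇒ Q[e/x] can be carried out mechanically. The sketch below is only an illustration (it assumes the z3-solver Python package, which is not part of the lecture notes): it computes Q[e/x] for the second example above and checks the implication by asking whether its negation is satisfiable.

from z3 import Ints, Implies, Not, Solver, substitute, unsat

x, y, a = Ints('x y a')
P = (x + a == y)                  # candidate pre-condition
Q = (x == y)                      # post-condition of  x := x+a
W = substitute(Q, (x, x + a))     # Q[e/x] with e = x+a, i.e.  x+a = y

s = Solver()
s.add(Not(Implies(P, W)))         # P => W is valid iff this is unsatisfiable
print(s.check() == unsat)         # expected: True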

How does a proof proceed now ?
{* x≠y *} tmp:= x ; x:=y ; y:=tmp {* x≠y *}
Rule for SEQ requires you to come up with intermediate assertions:
{* x≠y *} tmp:= x {* ? *} ; x:=y {* ? *} ; y:=tmp {* x≠y *}
What to fill in??

Weakest Pre-condition (wp)
Imagine we have this function: wp : Stmt → Pred → Pred, such that wp S Q gives a valid pre-cond: executing S in any state in this pre-cond results in states in Q.
Partial correctness : S is assumed to terminate.
Total correctness : the weakest pre-condition must also guarantee that S terminates.
Even better: let wp construct the weakest valid pre-cond.

Weaker and stronger
If p ⇒ q is valid, we say that p is stronger than q; conversely, q is weaker than p.
A state-predicate p can also be seen as characterizing a set of states, namely those states on which p is true.
"p is stronger than q" is the same as saying, in terms of sets, that "p is a subset of q".

Weakest pre-condition
We can "characterize" wp as follows: {* P *} S {* Q *}  ≡  P ⇒ wp S Q
Does this define a valid pre-cond? Yes, you can prove: {* wp S Q *} S {* Q *}
Corollary: this reduces program verification problems to proving implications!
Corollary: the reduction is complete. That is, if the implication above is not valid, then neither is the specification.

Weakest pre-condition
But the previous def. is not a constructive one: we still don't know how to construct this wp.
Possible for assignments, if-then-else, ...
Not possible if the code of S is not known.
Not possible for loops and recursion.
wp skip Q = Q
wp (x:=e) Q = Q[e/x]

wp of SEQ
wp (S1 ; S2) Q = wp S1 (wp S2 Q)
[Diagram: V --S1--> W --S2--> Q, where W = wp S2 Q and V = wp S1 W.]

wp of IF
wp (if g then S1 else S2) Q = (g /\ wp S1 Q) \/ (¬g /\ wp S2 Q)
Other formulation : (g ⇒ wp S1 Q) /\ (¬g ⇒ wp S2 Q)
[Diagram: from a state satisfying g, S1 must end in Q, so V = wp S1 Q; from a state satisfying ¬g, S2 must end in Q, so W = wp S2 Q.]
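Putting the wp clauses for skip, assignment, SEQ and IF together gives a tiny wp calculator. The sketch below is illustrative only: it assumes the z3-solver package for substitution and the boolean connectives, and the statement classes are ad hoc, not the course's uPL.

from z3 import Ints, And, Or, Not, substitute

class Skip:                                   # skip
    def wp(self, Q): return Q

class Assign:                                 # x := e
    def __init__(self, x, e): self.x, self.e = x, e
    def wp(self, Q): return substitute(Q, (self.x, self.e))        # Q[e/x]

class Seq:                                    # S1 ; S2
    def __init__(self, s1, s2): self.s1, self.s2 = s1, s2
    def wp(self, Q): return self.s1.wp(self.s2.wp(Q))

class If:                                     # if g then S1 else S2
    def __init__(self, g, s1, s2): self.g, self.s1, self.s2 = g, s1, s2
    def wp(self, Q):
        return Or(And(self.g, self.s1.wp(Q)), And(Not(self.g), self.s2.wp(Q)))

x, y = Ints('x y')
prog = If(x < y, Assign(x, y), Skip())        # in effect: x := max(x, y)
print(prog.wp(x >= y))                        # (x<y /\ y>=y) \/ (not(x<y) /\ x>=y)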

How does a proof proceed now ?
{* x≠y *} tmp:= x ; x:=y ; y:=tmp {* x≠y *}
Calculate: W = wp (tmp:= x ; x:=y ; y:=tmp) (x≠y)
Then prove: x≠y ⇒ W
We calculate the intermediate assertions, rather than figuring them out by hand!
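Concretely, the calculation runs backwards through the statements: wp (y:=tmp) (x≠y) = (x≠tmp); wp (x:=y) (x≠tmp) = (y≠tmp); wp (tmp:=x) (y≠tmp) = (y≠x). So W = (y≠x), and the remaining proof obligation x≠y ⇒ y≠x is immediate.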

Example  wp (x:=e) Q = Q[e/x] {* 0i /\ ¬found /\ (found = (k : 0k<i : a[k]=x)) *} found := a[i]=x ; i:=i+1 {* found = (k : 0k<i : a[k]=x) *}  (a[i]=x) = (k : 0k<i+1 : a[k]=x) found = (k : 0k<i+1 : a[k]=x) wp (x:=e) Q = Q[e/x]

Some notes about the verification approach
In our training, we prove P ⇒ W by hand.
In practice, to some degree this can be automatically checked using e.g. a SAT solver. It checks whether ¬(P ⇒ W) is satisfiable. If it is, then P ⇒ W is not valid.
If the calculation of W was "complete", the witness of the satisfiability of ¬(P ⇒ W) is essentially an input that will expose a bug in the program, which is useful for debugging.
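For illustration, here is how such a check could look with the Z3 Python bindings (an assumption; any SAT/SMT solver works the same way). W_bad is a deliberately wrong "calculated" pre-condition, so the solver returns a model, i.e. an input state exposing the mismatch.

from z3 import Ints, Implies, Not, Solver, sat

x, y = Ints('x y')
P = (x <= y)            # given pre-condition
W_bad = (y > x)         # a wrong wp: too strong, it excludes x = y

s = Solver()
s.add(Not(Implies(P, W_bad)))        # is  not (P => W)  satisfiable?
if s.check() == sat:
    print("P => W is not valid; counterexample:", s.model())   # a state with x = y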

Can we prove the correctness of our inference rules?
First, give reasonable "models" of the concepts involved.
What is a statement? type Stmt = State → State, but with this we can only model deterministic programs.
How about: type Stmt = State → set of State, such that stmt s describes the set of all its possible final states.

Meta
What are ";" and "if-then"?
(S1 ; S2) s = ∪ { S2 t | t ∈ S1 s }
(if g then S1 else S2) s = if g s then S1 s else S2 s
What is a pre-/post-condition, e.g. x≥0? We can model it by a function of type State → bool.
What is a Hoare triple? (partial correctness)
{* P *} S {* Q *}  =  (∀s : P s : (∀t : t ∈ S s : Q t))
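These definitions can be transliterated almost directly into Python; the sketch below is only a model for experimenting (lists stand in for sets of states, and a finite list of sample states stands in for the full state space).

def seq(S1, S2):               # (S1 ; S2) s  =  UNION { S2 t | t in S1 s }
    return lambda s: [u for t in S1(s) for u in S2(t)]

def ite(g, S1, S2):            # (if g then S1 else S2) s
    return lambda s: S1(s) if g(s) else S2(s)

def triple(P, S, Q, states):   # (forall s : P s : (forall t : t in S s : Q t))
    return all(Q(t) for s in states if P(s) for t in S(s))

inc_x = lambda s: [{**s, 'x': s['x'] + 1}]          # x := x+1 (deterministic)
states = [{'x': v} for v in range(-3, 4)]
print(triple(lambda s: s['x'] >= 0, inc_x, lambda t: t['x'] > 0, states))   # True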

Example, proving rules
Post-condition weakening rule :
{* P *} S {* Q *} , Q ⇒ R
--------------------------------------------
{* P *} S {* R *}
In the model this becomes:
[A1] (∀s : P s : (∀t : t ∈ S s : Q t))
[A2] (∀s :: Q s ⇒ R s)
[G] (∀s : P s : (∀t : t ∈ S s : R t))
Pretty straightforward...

Btw, a few more rules
Pre-condition strengthening:
{* Q *} S {* R *} , P ⇒ Q
--------------------------------------------
{* P *} S {* R *}
Conjunction and disjunction:
{* P1 *} S {* Q1 *} , {* P2 *} S {* Q2 *}
--------------------------------------------------------------------------
{* P1 /\ P2 *} S {* Q1 /\ Q2 *}
{* P1 \/ P2 *} S {* Q1 \/ Q2 *}

Part II : reasoning about loops

How to prove this ?
{* P *} while g do S {* Q *}
Calculate wp first? Unfortunately, for loops we can't.
Idea : give me a predicate that S will always establish at the end of every iteration.

Idea
{* P *} while g do S {* Q *}
Try to come up with a predicate I that holds at the end of every iteration:
iter1 : // g // ; S {* I *}
iter2 : // g // ; S {* I *}
…
itern : // g // ; S {* I *} // last iteration!
exit : // ¬g //
I /\ ¬g holds as the loop exits!
So, to establish the post-cond Q, it is sufficient to prove: I /\ ¬g ⇒ Q

Ok, what kind of predicate is that??
while g do S
I has to hold at the end of each iteration.
Sufficient to prove: {* I /\ g *} S {* I *}
iter i : … S {* I *}
iter i+1 : // g // S {* I *}
Except for the first iteration!

Idea
{* P *} while g do S
For the first iteration, additionally we need : P ⇒ I
Recall the condition: {* I /\ g *} S {* I *}
Iter1 : {* P *} {* I *} // g // S {* I *}
We know P holds there from the given pre-cond.

To Summarize
Capture this in an inference rule:
P ⇒ I // setting up I
{* g /\ I *} S {* I *} // invariance
I /\ ¬g ⇒ Q // exit cond
----------------------------------------
{* P *} while g do S {* Q *}
This rule is only good for partial correctness though.
An "I" satisfying the second premise above is called an invariant.
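To see the rule in action, the three premises can be checked mechanically for a concrete loop and a guessed invariant. The sketch below is illustrative (it assumes the z3-solver package) and does this for {* i=0 /\ n=10 *} while i<n do i++ {* i=n *} with the invariant I : i ≤ n.

from z3 import Ints, And, Not, Implies, Solver, substitute, unsat

i, n = Ints('i n')
P, Q = And(i == 0, n == 10), (i == n)
g, I = (i < n), (i <= n)                    # guard and guessed invariant
wp_body_I = substitute(I, (i, i + 1))       # wp (i := i+1) I  =  (i+1 <= n)

def valid(f):
    s = Solver(); s.add(Not(f)); return s.check() == unsat

print(valid(Implies(P, I)))                       # setting up I
print(valid(Implies(And(g, I), wp_body_I)))       # invariance
print(valid(Implies(And(I, Not(g)), Q)))          # exit cond
# all three should print True for this choice of I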

Few things to note
An I satisfying the 2nd and 3rd conditions of the previous rule is always a valid pre-condition of the loop, towards the post-cond Q. That is, we also have this rule:
{* g /\ I *} S {* I *}
I /\ ¬g ⇒ Q
----------------------------------------
{* I *} while g do S {* Q *}
Indeed, I is not necessarily the weakest such pre-condition.
The previous rule can just as well be obtained from the above rule + pre-condition strengthening.
Nothing prevents us from chaining I into the wp calculation, although the resulting predicate may then not be the weakest pre-cond.

Examples
Prove (partial correctness) : {* true *} while i≠n do i++ {* i=n *}
Prove: {* i=0 /\ n=10 *} while i<n do i++ {* i=n *}

Examples
Prove: {* i=0 /\ s=0 *} while i<n do { s := s+2 ; i++ } {* isEven(s) *}
Prove (will return to this later) :
{* 0≤n *}
i := 0 ; r := true ;
while i<n do { r := r /\ (a[i]=0) ; i++ }
{* r = (∀k : 0≤k<n : a[k]=0) *}
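For the first example a natural invariant is simply isEven(s). The sketch below is illustrative (it assumes the z3-solver package, and encodes isEven as s % 2 = 0); it checks the invariance condition, and the initialization and exit conditions are analogous.

from z3 import Ints, And, Implies, Not, Solver, substitute, unsat

i, n, s = Ints('i n s')
I, g = (s % 2 == 0), (i < n)                      # isEven(s), loop guard
wp_body_I = substitute(I, (s, s + 2), (i, i + 1)) # wp (s:=s+2 ; i++) I

chk = Solver()
chk.add(Not(Implies(And(I, g), wp_body_I)))       # invariance: I /\ g ==> wp body I
print(chk.check() == unsat)                       # expected: True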

Proving termination
{* P *} while g do S {* Q *}
Idea: come up with an integer expression m, satisfying :
At the start of every iteration, m > 0.
Each iteration decreases m.
These imply that the loop will terminate.

Capturing the termination conditions
At the start of every iteration m > 0 : g ⇒ m > 0
If you have an invariant: I /\ g ⇒ m > 0
Each iteration decreases m : {* I /\ g *} C:=m; S {* m<C *}

To Summarize P  I // setting up I {* g /\ I *} S {* I *} // invariance I /\ g  Q // exit cond {* I /\ g *} C:=m; S {* m<C *} // m decreasing I /\ g  m > 0 // m bounded below ---------------------------------------- {* P *} while g do S {* Q *} Since we also have this pre-cond strengthening rule: P  I , {* I *} while g do S {*Q*} ------------------------------------------------------ {* P *} while g do S {* Q *}

The lecture notes often refer to this rule:
{* g /\ I *} S {* I *} // invariance
I /\ ¬g ⇒ Q // exit cond
{* I /\ g *} C:=m; S {* m<C *} // m decreasing
I /\ g ⇒ m > 0 // m bounded below
----------------------------------------
{* I *} while g do S {* Q *}

Examples {* in *} while i<n do i++ {* true *} {* true *} while in do i++ {* true *} {* reds=100 /\ blues=100 } while reds>0 \/ blues>0 do { if reds>0 then { reds-- ; blues += 2 } else { blues-- } } {* true *} n – 1 can’t guarantee termination 3reds + blue, Require I: reds>=0 /\ blues>=0

Structuring the proof(s)
Recall this example:
{* 0≤n *}
i := 0 ; r := true ;
while i<n do { r := r /\ (a[i]=0) ; i++ }
{* r = (∀k : 0≤k<n : a[k]=0) *}
Chosen invariant & termination metric:
I : (r = (∀k : 0≤k<i : a[k]=0)) /\ 0≤i≤n // the two conjuncts are referred to as I1 and I2
m : n − i

It comes down to proving these:
(Exit Condition) I /\ ¬g ⇒ Q
(Initialization Condition) {* given-pre-cond *} i := 0 ; r := true {* I *}, equivalently : given-pre-cond ⇒ wp (i := 0 ; r := true) I
(Invariance) I /\ g ⇒ wp body I
(Termination Condition) I /\ g ⇒ wp (C:=m ; body) (m<C)
(Termination Condition) I /\ g ⇒ m>0
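As an illustration of how such a condition looks for an automatic prover, the sketch below phrases the Exit Condition for Z3 (assuming the z3-solver package). Quantified goals like this are usually discharged, but an SMT solver may in principle also answer 'unknown'.

from z3 import Ints, Bool, Array, IntSort, ForAll, Implies, And, Not, Solver

i, n, k = Ints('i n k')
r = Bool('r')
a = Array('a', IntSort(), IntSort())

def all_zero(upper):              # (forall k : 0 <= k < upper : a[k] = 0)
    return ForAll(k, Implies(And(0 <= k, k < upper), a[k] == 0))

I = And(r == all_zero(i), 0 <= i, i <= n)     # invariant I1 /\ I2
g = (i < n)
Q = (r == all_zero(n))

s = Solver()
s.add(Not(Implies(And(I, Not(g)), Q)))        # Exit Condition: I /\ not g ==> Q
print(s.check())                              # 'unsat' means the condition is valid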

Top level structure of the proofs 1
PROOF PEC
[A1] r = (∀k : 0≤k<i : a[k]=0)
[A2] 0≤i≤n
[A3] i≥n
[G] r = (∀k : 0≤k<n : a[k]=0)
.... (the proof itself)
PROOF PInit
[A1] 0≤n
[G] (true = (∀k : 0≤k<0 : a[k]=0)) /\ 0≤0≤n // calculated wp

Proof of init
PROOF PInit
[A1] 0≤n
[G] (true = (∀k : 0≤k<0 : a[k]=0)) /\ 0≤0≤n // calculated wp
-----------
1. { follows from A1 } 0≤0≤n
2. { see subproof } true = (∀k : 0≤k<0 : a[k]=0)
   EQUATIONAL PROOF
   (∀k : 0≤k<0 : a[k]=0)
   = { the domain is false }
   (∀k : false : a[k]=0)
   = { ∀ over an empty domain }
   true
   END
3. { conjunction of 1 and 2 } G
END

Top level structure of the proofs 2
PROOF PIC
[A1] r = (∀k : 0≤k<i : a[k]=0)
[A2] 0≤i≤n
[A3] i<n
[G1] r /\ (a[i]=0) = (∀k : 0≤k<i+1 : a[k]=0)
[G2] 0≤i+1≤n
----------------------------------------------------------------------
1. { see eq. subproof below } G1
   EQUATIONAL PROOF
   (∀k : 0≤k<i+1 : a[k]=0)
   = { dom. merge , PIC.A2 }
   (∀k : 0≤k<i \/ k=i : a[k]=0)
   = { ∀ domain-split }
   (∀k : 0≤k<i : a[k]=0) /\ (∀k : k=i : a[k]=0)
   = { PIC.A1 }
   r /\ (∀k : k=i : a[k]=0)
   = { quant. over singleton }
   r /\ (a[i]=0)
   END
2. ... ....
END

Top level structure of the proofs 3
PROOF TC1
[A1] r = (∀k : 0≤k<i : a[k]=0)
[A2] 0≤i≤n
[A3] i<n
[G] n - (i+1) < n - i // calculated wp
PROOF TC2
[A1] r = (∀k : 0≤k<i : a[k]=0)
[A2] 0≤i≤n
[A3] i<n
[G] n - i > 0

Reducing a program spec to a statement spec
{* x > 0 *} P(x:int) { x++ ; return ... } {* return = x+1 *}
Things to take into account :
How are parameters passed? In uPL : by-value and by-copy-restore.
Only a return at the end is allowed in uPL : its value is represented by an assignment to a dummy variable "return".
The above reduces to:
{* x > 0 *} X:= x ; x++ ; return := ... {* return = X+1 *}
We introduce an auxiliary variable X to represent x's original value.

Reducing a program to its body
How about this :
{* y≠-1 *} P(x, OUT y) { y++ ; x++ ; return x/y } {* return = (x+1)/y *}
It makes sense to agree that y in the post-cond refers to its new value, whereas x refers to its old value.
Reduce to:
{* y≠-1 *} X:=x ; y++ ; x++ ; return := x/y {* return = (X+1)/y *}

Unstructured programs
"Structured" program: the control flow follows the program's syntax.
Unstructured program:
if y=0 then goto red ;
x := x/y ;
red: S2
The "standard" Hoare logic rule for sequential composition is broken!
Exceptions and "return" in the middle create a similar issue.

Adjusting Hoare Logic for Unstructured Programs
The program is represented by a graph of guarded assignments.
[Figure: a control-flow graph whose nodes are control locations; the loop location has a self-loop labelled x<n → x++ and an exit edge labelled x=n → skip.]
A node represents a "control location".
An edge is a guarded assignment that moves the control of S from one location to another. An assignment can only execute if its guard is true.

Floyd Assertion Network
[Figure: the same graph, now with each node decorated with an assertion: 0 ≤ x ≤ n at the loop location, x = n at the exit location.]
Decorate nodes with assertions.
Prove the entry and exit: the pre-condition implies the assertion of the entry node, and the assertion of the exit node implies the post-condition.
Prove, for each edge, the corresponding Hoare triple.
This works the same if you have cycles; however, such an assertion network alone won't prove termination. To prove termination you need to come up with a termination metric with a lower bound, and to prove that each transition in the graph decreases the value of the metric.
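The per-edge proof obligations can be generated mechanically from the decorated graph. The sketch below is illustrative (it assumes the z3-solver package) and does this for the toy graph in the figure; the entry and exit obligations are left out for brevity.

from z3 import Ints, And, Not, Implies, Solver, substitute, unsat

x, n = Ints('x n')
assertion = {0: And(0 <= x, x <= n), 1: (x == n)}       # node decorations
edges = [(0, x < n, (x, x + 1), 0),                     # x<n -> x++  (stay at 0)
         (0, x == n, (x, x), 1)]                        # x=n -> skip (exit to 1)

def valid(f):
    s = Solver(); s.add(Not(f)); return s.check() == unsat

for (src, g, (lhs, rhs), dst) in edges:
    # {* assertion[src] /\ g *}  lhs := rhs  {* assertion[dst] *}
    vc = Implies(And(assertion[src], g), substitute(assertion[dst], (lhs, rhs)))
    print(src, '->', dst, valid(vc))                    # both edges: True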

Handling exceptions and return-in-the-middle
With Floyd's approach we can handle those, e.g.:
if g then { S ; return } ; T ; return
try S catch T
[Figure: the corresponding control-flow graphs, with edges guarded by g and ¬g leading through S and T to the exit location.]