Verification of Concurrent Programs


1 Verification of Concurrent Programs
Part 2: Proof systems for concurrent programs

2 Outline
Recap of Hoare proofs
The Owicki-Gries method
Modular Proofs: Rely-Guarantee proofs
Atomicity: Lipton's Reduction

3 Recap of Hoare proofs

4 Hoare logic
Presented by Floyd and Hoare in the late 60s; similar ideas appear in a 1949 paper by Turing. Main idea: build program proofs following the syntactic structure, i.e., proof rules for control-flow structures.
{ φ } C { ψ }
If execution of C begins in a state satisfying φ, then on termination, the final state satisfies ψ.

5 A sequential language: IMP
P ::= skip | x := expr | P1 ; P2 | if (cond) then P1 else P2 | while (cond) P1
Here, cond is a Boolean expression over program variables and expr is any expression over program variables.

6 Floyd/Hoare proof rules: Atomic statements
NOOP: {φ} skip {φ}
ASSIGN: {φ[x/e]} x := e {φ}
Example: { y + 3 > 0 } x := 3 { x + y > 0 }
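The ASSIGN rule can be sanity-checked by brute force over a small state space: run the assignment from every state satisfying the precondition and check the postcondition. This is only an illustration, not a proof; the helper names (`triple_holds`, `states`) are ours, not from the slides.

```python
# Brute-force check of a Hoare triple over a finite set of states.
def triple_holds(pre, stmt, post, states):
    """Check {pre} stmt {post}: run stmt from every state satisfying pre."""
    for s in states:
        if pre(s):
            t = stmt(dict(s))  # copy so the original state is untouched
            if not post(t):
                return False
    return True

states = [{"x": x, "y": y} for x in range(-5, 6) for y in range(-5, 6)]

# The slide's example: { y + 3 > 0 } x := 3 { x + y > 0 }
ok = triple_holds(
    pre=lambda s: s["y"] + 3 > 0,          # precondition is post[x/3]
    stmt=lambda s: {**s, "x": 3},          # the assignment x := 3
    post=lambda s: s["x"] + s["y"] > 0,
    states=states,
)
print(ok)  # True
```

The precondition { y + 3 > 0 } is exactly the postcondition with 3 substituted for x, which is why the check succeeds on every tested state.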

7 Floyd/Hoare proof rules: If
{ φ ⋀ C } P { ψ }    { φ ⋀ ¬C } P' { ψ }
IF: {φ} if (C) P else P' {ψ}
Examples:
{ x != 0 ⋀ x > 0 } skip { x > 0 }
{ x != 0 ⋀ x ≤ 0 } x := -1 * x { x > 0 }
{ x != 0 } if (x > 0) skip; else x := -1 * x { x > 0 }
{ x % 2 == 0 } skip { x % 2 == 0 }
{ x % 2 != 0 } x := x + 1 { x % 2 == 0 }
{ true } if (x % 2 == 0) skip; else x := x + 1 { x % 2 == 0 }

8 Floyd/Hoare proof rules: While
{ φ ⋀ C } P { φ }
WHILE: {φ} while (C) P {φ ⋀ ¬C}
Examples:
{ x ≥ 0 ⋀ x > 0 } x := x - 1 { x ≥ 0 }
{ x ≥ 0 } while (x > 0) x := x - 1; { x = 0 }
{ x ≥ 0 ⋀ x = y ⋀ x > 0 } x--; y--; { x ≥ 0 ⋀ x = y }
{ x ≥ 0 ⋀ x = y } while (x > 0) { x--; y--; } { x ≥ 0 ⋀ x = y ⋀ x ≤ 0 }
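The invariant argument in the two-variable example can be illustrated by execution: the invariant x ≥ 0 ⋀ x = y holds on entry and after every iteration, so on exit it holds together with the negated guard. A small sketch on sample inputs (the helper `run_loop` is ours; running a loop checks instances, it does not replace the proof):

```python
# Execute the loop while asserting the invariant at every checkpoint.
def run_loop(x):
    y = x
    inv = lambda: x >= 0 and x == y  # invariant: x >= 0 and x = y
    assert inv()                     # holds on entry
    while x > 0:
        x -= 1
        y -= 1
        assert inv()                 # preserved by each iteration
    # On exit the invariant holds together with not (x > 0), so x = y = 0.
    return x, y

results = [run_loop(n) for n in range(5)]
print(results)  # [(0, 0), (0, 0), (0, 0), (0, 0), (0, 0)]
```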

9 Floyd/Hoare proof rules: Rest
{ φ } P { ψ' }    { ψ' } Q { ψ }
SEQ: { φ } P; Q { ψ }
Example (combined with the rule of consequence on the while triple from before):
{ x ≥ 0 ⋀ x = y } while (x > 0) { x--; y--; } { x ≥ 0 ⋀ x = y ⋀ x ≤ 0 }
{ x ≥ 0 ⋀ x = y } while (x > 0) { x--; y--; } { y = 0 }

10 Floyd/Hoare proof: example
if (bal > 1000) cs := cs + 100 else cs := cs + 0
Proof goal: { cs = 0 } P { cs ≥ 100 → bal > 1000 }
{ cs + 100 ≥ 100 → bal > 1000 } cs := cs + 100 { cs ≥ 100 → bal > 1000 }
{ cs = 0 ⋀ bal > 1000 } cs := cs + 100 { cs ≥ 100 → bal > 1000 }
{ cs ≥ 100 → bal > 1000 } cs := cs + 0 { cs ≥ 100 → bal > 1000 }
{ cs = 0 ⋀ bal ≤ 1000 } cs := cs + 0 { cs ≥ 100 → bal > 1000 }

11 Floyd/Hoare proof: example
if (bal > 1000) cs := cs + 100 else cs := cs + 0
Proof goal: { cs = 0 } P { cs ≥ 100 → bal > 1000 }
{ cs = 0 ⋀ bal > 1000 } cs := cs + 100 { cs ≥ 100 → bal > 1000 }
{ cs = 0 ⋀ bal ≤ 1000 } cs := cs + 0 { cs ≥ 100 → bal > 1000 }
{ cs = 0 } if (bal > 1000) cs := cs + 100 else cs := cs + 0 { cs ≥ 100 → bal > 1000 }

12 Floyd/Hoare proofs
Floyd/Hoare proofs are sound: if you prove { φ } P { ψ }, every terminating execution of P starting from a state satisfying φ ends in a state satisfying ψ.
Floyd/Hoare proofs are relatively complete: for every program P such that every terminating execution of P starting from a state satisfying φ ends in a state satisfying ψ, we can prove { φ } P { ψ }, provided the underlying logic is complete.

13 A concurrent language: IMP
P ::= skip | x := expr | P1 ; P2 | if (cond) then P1 else P2 | while (cond) P1
    | P1 ║ P2             // parallel composition
    | [ P1 ]              // atomic
    | await cond then P1  // conditional atomic

14 Owicki-Gries proofs

15 Owicki-Gries method: Introduction
First extension of Floyd/Hoare proofs to shared-memory concurrency Presented by Owicki and Gries in mid 70s New rules for concurrency constructs Simple idea: Can compose proofs as long as they don’t interfere with each other

16 Simple things first…
{ φ } P { ψ }
ATOMIC: {φ} [P] {ψ}
{ φ ⋀ C } [P] { ψ }
COND_ATOMIC: {φ} await (C) then P {ψ}
Executing something atomically is the same as executing it in a sequential context. A conditional atomic waits for its condition to be true and then executes in a sequential context.

17 Simple things first…
Compare the conditional atomic rule to the if rule. Intuitively, a conditional atomic is an if that waits for its condition.
{ φ ⋀ C } [P] { ψ }
COND_ATOMIC: {φ} await (C) then P {ψ}
{ φ ⋀ C } P { ψ }    { φ ⋀ ¬C } P' { ψ }
IF: {φ} if (C) then P else P' {ψ}

18 Example: A lock
lock(l): await lock == 0 then lock := tid
unlock(l): await lock == tid then lock := 0
Correctness: if lock(l) succeeds, the current thread holds the lock; if unlock(l) succeeds, no one holds the lock.

19 Example: Verifying a lock
Lock:
{ tid = tid } lock := tid { lock = tid }
{ true ⋀ lock = 0 } lock := tid { lock = tid }
{ true } await lock = 0 then lock := tid { lock = tid }
Unlock:
{ 0 = 0 } lock := 0 { lock = 0 }
{ lock = tid ⋀ 0 = 0 } lock := 0 { lock = 0 }
{ true } await lock = tid then lock := 0 { lock = 0 }

20 Towards a parallel composition rule
Attempt 1: Prove using standard Floyd/Hoare rules
t1(): sum = 0; sum = sum + x; sum = sum + y
t2(): sum2 = 0; sum2 = sum2 + x*x; sum2 = sum2 + y*y
{ true } t1() { sum = x + y }
{ true } t2() { sum2 = x*x + y*y }
Now, we have this: { true } t1() ║ t2() { sum = x + y ⋀ sum2 = x*x + y*y }

21 Towards a parallel composition rule
Attempt 1: What's wrong with this?
{ φ1 } P1 { ψ1 }    { φ2 } P2 { ψ2 }
{ φ1 ⋀ φ2 } P1 ║ P2 { ψ1 ⋀ ψ2 }
If P1 and P2 work on the same variables, we have a problem. For (x := x + 1) ║ (x := x + 1), we have { x = 0 } x := x + 1 { x = 1 }, but not { x = 0 } (x := x + 1) ║ (x := x + 1) { x = 1 }. This rule is unsound.
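The unsoundness can be demonstrated by enumerating interleavings. Assuming each non-atomic x := x + 1 is split into a read step and a write step (the usual lost-update model; the encoding below is ours), the final value of x is not always 2:

```python
from itertools import permutations

# Enumerate all interleavings of (x := x + 1) || (x := x + 1), with each
# assignment split into a read (r) and a write (w); per-thread program
# order (read before write) is preserved.
def finals():
    results = set()
    for order in set(permutations(["r1", "w1", "r2", "w2"])):
        if order.index("r1") > order.index("w1"):
            continue  # thread 1 must read before it writes
        if order.index("r2") > order.index("w2"):
            continue  # thread 2 must read before it writes
        x, t1, t2 = 0, 0, 0
        for step in order:
            if step == "r1":
                t1 = x
            elif step == "w1":
                x = t1 + 1
            elif step == "r2":
                t2 = x
            elif step == "w2":
                x = t2 + 1
        results.add(x)
    return results

print(finals())  # {1, 2}: the "lost update" interleavings end with x = 1
```

So neither { x = 1 } nor { x = 2 } is a valid postcondition under the naive rule.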

22 Towards a parallel composition rule
Attempt 2: What's wrong with this?
{ φ1 } P1 { ψ1 }    { φ2 } P2 { ψ2 }
{ φ1 ⋀ φ2 } P1 ║ P2 { ψ1 ⋀ ψ2 }
given that P1 and P2 don't read and write the same variables
There is no way to prove some programs: the rule is incomplete.

23 Towards a parallel composition rule
t1: if (bal > 1000) cs := cs + 100 else cs := cs + 0
t2: bal := bal
Proof goal: { cs = 0 ⋀ bal = B } t1 ║ t2 { (cs ≥ 100 → bal > 1000) ⋀ bal = B }
We have already proved that { cs = 0 } t1 { cs ≥ 100 → bal > 1000 }.
It is easy to prove that { bal = B } t2 { bal = B }.
Now, the key point is that bal := bal does not interfere with the proof of { cs = 0 } t1 { cs ≥ 100 → bal > 1000 }.

24 The Owicki-Gries rule
Main idea: we can compose proofs as long as they don't interfere with each other. What is interference? The definition based on read/written variables is too strong. Owicki and Gries: two proofs don't interfere if statements in each don't affect the critical predicates in the other.
In a proof {φ} P {ψ}, a critical predicate is either ψ or a precondition φ_i of { φ_i } s_i { ψ_i }.
{ φ1 } P1 { ψ1 }    { φ2 } P2 { ψ2 }
{ φ1 ⋀ φ2 } P1 ║ P2 { ψ1 ⋀ ψ2 }
if for every critical predicate φ from one proof and every assignment or atomic rule { φ_i' } s_i' { ψ_i' } from the other proof, we have { φ ⋀ φ_i' } s_i' { φ }

25 Owicki-Gries rule: Example
t1: if (bal > 1000) cs := cs + 100 else cs := cs + 0
t2: bal := bal
In our proof of { cs = 0 } t1 { cs ≥ 100 → bal > 1000 }, the assignment rules were
{ cs + 100 ≥ 100 → bal > 1000 } cs := cs + 100 { cs ≥ 100 → bal > 1000 }
{ cs ≥ 100 → bal > 1000 } cs := cs + 0 { cs ≥ 100 → bal > 1000 }
For a proof of { bal = B } bal := bal { bal = B }, the assignment rules are
{ bal = B } bal := bal { bal = B }

26 OG rule: Example and remarks
In the previous proof, what would happen if bal := bal is replaced by bal := bal - 5000?
The big questions:
Is the OG rule sound? Can we prove only true things? Yes.
Is it (relatively) complete? Can we prove all true things? No!

27 OG incompleteness: Example
Prove { x = 0 } x := x + 1 ║ x := x + 1 { x = 2 }
It is hard to prove that we end up with the value 2 for x: for each statement, we don't know what the postcondition should be, because we have no idea whether the other statement has executed.
Solution: use auxiliary variables to encode such information.

28 Auxiliary variables
Replace the program with a similar program with additional variables:
[ x := x + 1; d1 := 1 ] ║ [ x := x + 1; d2 := 1 ]
We've encoded control flow into data; now we can talk about whether another statement has been executed.
{ d1 = 0 ⋀ ((d2 = 0 ⋀ x = 0) ⋁ (d2 = 1 ⋀ x = 1)) }
[ x := x + 1; d1 := 1 ]
{ d1 = 1 ⋀ ((d2 = 0 ⋀ x = 1) ⋁ (d2 = 1 ⋀ x = 2)) }

29 Auxiliary variables
{ d1 = 0 ⋀ ((d2 = 0 ⋀ x = 0) ⋁ (d2 = 1 ⋀ x = 1)) } [ x := x + 1; d1 := 1 ] { d1 = 1 ⋀ ((d2 = 0 ⋀ x = 1) ⋁ (d2 = 1 ⋀ x = 2)) }
{ d2 = 0 ⋀ ((d1 = 0 ⋀ x = 0) ⋁ (d1 = 1 ⋀ x = 1)) } [ x := x + 1; d2 := 1 ] { d2 = 1 ⋀ ((d1 = 0 ⋀ x = 1) ⋁ (d1 = 1 ⋀ x = 2)) }
{ d1 = 0 ⋀ d2 = 0 ⋀ x = 0 } [ x := x + 1; d1 := 1 ] ║ [ x := x + 1; d2 := 1 ] { x = 2 }
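Because the instrumented blocks are atomic, only two interleavings exist, and both satisfy the postcondition x = 2. A quick executable check of that claim (our encoding, for illustration):

```python
# The two atomic blocks of the instrumented program.
def block1(s):
    s["x"] += 1
    s["d1"] = 1

def block2(s):
    s["x"] += 1
    s["d2"] = 1

# With atomic blocks there are exactly two interleavings.
finals = []
for order in ([block1, block2], [block2, block1]):
    s = {"x": 0, "d1": 0, "d2": 0}
    for b in order:
        b(s)
    finals.append(s["x"])
print(finals)  # [2, 2]: the postcondition x = 2 holds either way
```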

30 Auxiliary variable elimination rule
{ φ } P { ψ }
{ φ } erase(P, V) { ψ }
You can erase a set V of variables that do not appear in ψ and do not affect other variables in P.
Main caveat of the OG method: coming up with auxiliary variables is hard and tedious, and proofs blow up.

31 Owicki-Gries method: Summary
Very simple rules for atomics
Complicated rule for non-interference
Auxiliary variables for completeness
In general, if you compose proofs containing m and n statements each, you are doing m*n additional checks.

32 Modular Proofs: Rely-Guarantee

33 Rely-Guarantee proofs
Owicki-Gries proofs can become complex: auxiliary variables, a quadratic number of non-interference checks, and, in general, no composability.
Rely-Guarantee overcomes some of these caveats.
Main idea: instead of trying to write interference-free proofs, why not explicitly account for the allowed interference? No additional interference checks are required.

34 OG problems
t1: if (bal > 1000) cs := cs + 100 else cs := cs + 0
t2: bal := bal; bal := bal; bal := bal; bal := 1.1 * bal; bal := bal + 30
The OG proof gets more and more complex: the number of non-interference checks keeps growing. Intuitively, all the statements in thread 2 are similar; the non-interference holds for the same reason each time.

35 Rely-Guarantee proofs
R and G are two-state predicates, relating the initial and final states of a statement or a sequence of statements. For example, x := x + 1 will be written as x' = x + 1.
C ⊨ ( φ, R, G, ψ )
Informally: C, starting from φ, relying on R and guaranteeing G, terminates in a state satisfying ψ. If execution of C begins in a state satisfying φ and the other threads only execute statements that satisfy R, then on termination, the final state satisfies ψ; C only executes statements that satisfy G.
We need φ and ψ to be stable w.r.t. R.

36 Rely-Guarantee proofs
C ⊨ ( φ, R, G, ψ )
(Diagram slide: environment steps satisfying R interleaved with steps of C satisfying G, from φ to ψ.)

37 Rely-Guarantee examples
Independent statements: the triple relies on no other thread changing the value of x.
x := x + 1 ⊨ (x = 0, x' = x, x' = x + 1, x = 1)
Invariant preserving: the triple relies on no other thread decreasing the value of bal.
if (bal > 1000) cs := 100 ⊨ (true, bal' ≥ bal ⋀ cs' = cs, true, cs = 100 → bal > 1000)

38 Rely-Guarantee rules: Parallel composition
The rely of one thread becomes the guarantee of the other, and vice versa.
P1 ⊨ (φ1, R1, G1, ψ1)    G1 ⟹ R2
P2 ⊨ (φ2, R2, G2, ψ2)    G2 ⟹ R1
P1 ║ P2 ⊨ (φ1 ⋀ φ2, R1 ⋀ R2, G1 ⋁ G2, ψ1 ⋀ ψ2)
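The side conditions G1 ⟹ R2 and G2 ⟹ R1 are inclusions between relations on pairs of states. A minimal sketch, checking them by brute force over a small finite domain (the predicates chosen here are ours, in the spirit of the balance example):

```python
# Rely/guarantee predicates as relations on (x, x') pairs,
# checked over a small finite domain.
vals = range(-3, 4)
pairs = [(x, x2) for x in vals for x2 in vals]  # (x, x') pairs

G1 = lambda x, x2: x2 == x + 1   # thread 1 only ever increments x
R2 = lambda x, x2: x2 >= x       # thread 2 tolerates non-decreasing x
G2 = lambda x, x2: x2 == x       # thread 2 leaves x alone
R1 = lambda x, x2: x2 == x       # thread 1 relies on x being untouched

def implies(g, r):
    """g => r as relations: every pair allowed by g is allowed by r."""
    return all(r(x, x2) for x, x2 in pairs if g(x, x2))

print(implies(G1, R2), implies(G2, R1))  # True True: the parallel rule applies
```

If thread 1 could also decrement x, `implies(G1, R2)` would fail and the composition would be rejected.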

39 Rely-Guarantee: Atomic actions
For atomic actions, just copy things from Hoare triples:
{ φ } C { ψ }
[ C ] ⊨ (φ, Pres(φ) ⋀ Pres(ψ), φ → ψ', ψ)
The environment must preserve the pre- and post-condition (Pres); the guarantee is the pre- and post-condition of the current statement.

40 Rely-Guarantee rules: Sequential composition
P1 ⊨ (φ1, R, G, ψ1)    P2 ⊨ (φ2, R, G, ψ2)    ψ1 ⟹ φ2
P1; P2 ⊨ (φ1, R, G, ψ2)
No surprises here: if the rely and guarantee predicates are the same, sequential composition works as in Hoare logic.

41 Rely-Guarantee rules: Strengthening
φ’  φ R’  R G  G’ ψ ψ’ C ⊨ (φ, R, G, ψ) C ⊨ (φ’, R’, G’, ψ’)

42 The Balance example
Prove: (bal := bal + 5000) ║ (if (bal > 1000) cs := 100 else cs := 0) ⊨ (true, bal' = bal ⋀ cs' = cs, true, cs = 100 → bal > 1000)
T1: bal := bal + 5000 ⊨ (true, bal' = bal, bal' ≥ bal, true)
T2 ⊨ (true, bal' ≥ bal ⋀ cs' = cs, true, cs = 100 → bal > 1000)
The parallel rule applies: T1's guarantee bal' ≥ bal implies T2's rely.

43 Rely-Guarantee: Limitations
Rely-Guarantee reasoning forgets the order and number of actions in the environment Try proving (x:=x+1; x:=x+1)║(x:=x+1; x:=x+1) ⊨ (x = 0, x’ = x, true, x = 4) No way other than introducing auxiliary variables

44 Simplifying programs: Lipton’s Reduction

45 Reduction based methods
Early work by Lipton in the 70s; extended to a more general technique by Elmas, Qadeer, Tasiran, Sezgin and others starting in 2009.
A program simplification technique: rewrite a given program into a simpler program. What is simpler? Usually, larger atomic sections.

46 Reduction: The simplest example
Common programmer intuition: locks make everything inside atomic. Why? And if correct, how do we prove it?
lock(l); x := x + 1; unlock(l)
Keep moving lock(l) to the right till it is next to x := x + 1; similarly, keep moving unlock(l) to the left.
lock(l); x := x + 1; unlock(l) is equivalent to [ lock(l); x := x + 1; unlock(l) ]

47 Reduction: The simplest example
We have proved that the three statements can be considered atomic Lock acquire moved to the right and lock release moved to the left Cannot do it the other way. Why? Main idea: classify statements into those that move right and those that move left

48 Lipton's reduction
A statement α is a right mover if for every β from another thread, we have that α; β ⊑ β; α: executing α; β from a state leads to the same result as executing β; α.
Secondary condition: if α; β fails an assertion, so should β; α.
Similar rules hold for left movers.

49 Mover types examples
Usually, lock acquisition statements are right movers and lock release statements are left movers. Unrelated statements move across each other: assume (bal > 1000) moves right across bal := bal.

50 Reduction
Given a sequence of actions α1; α2; … ; αn; β; γ1; … ; γn:
If the αi's are right movers and the γi's are left movers, we can replace the sequence by [ α1; … ; αn; β; γ1; … ; γn ].
GOAL: make larger and larger atomic sections till the program becomes small enough to be reasoned about.

51 Additional rules: Reduce if
t1: if (bal > 1000) cs := cs + 100 else cs := cs + 0
t2: bal := bal
Rewrite the if as a nondeterministic choice with assumes:
if (*) [ assume (bal > 1000); cs := cs + 100 ] else [ assume (bal ≤ 1000); cs := cs + 0 ]
Both branches are left movers. If you prove that a guard and the corresponding branch are atomic, then the whole if construct is atomic:
[ if (bal > 1000) cs := cs + 100 else cs := cs + 0 ]
This simplifies the program: proving it using Owicki-Gries is much simpler, with way fewer non-interference checks.

52 A more complex example
t1: lock(x); l1 := x; l1 := l1 + 1; x := l1; unlock(x)
t2: lock(x); l2 := x; l2 := l2 + 1; x := l2; unlock(x)
The locks are clearly right movers and the unlocks left movers. How about l1 := l1 + 1 and l2 := l2 + 1? They are both left and right movers. What about the rest? The mover checks fail: we are missing some global information, e.g., when l1 := x is executed, thread 1 holds the lock.

53 Auxiliary Assertions
t1:
lock(x); o := 1
l1 := x; assert(o = 1)
l1 := l1 + 1; assert(o = 1)
x := l1; assert(o = 1)
unlock(x); o := 0
t2:
o := 2; lock(x)
assert(o = 2); l2 := x
assert(o = 2); l2 := l2 + 1
assert(o = 2); x := l2
o := 0; unlock(x)
Now, the mover checks magically succeed. Why? Because the assertions fail in both orders (α; β and β; α). We have added global information into each action and history information into static actions.

54 Auxiliary Assertions
[ lock(x); o := 1
  l1 := x; assert(o = 1)
  l1 := l1 + 1; assert(o = 1)
  x := l1; assert(o = 1)
  unlock(x); o := 0 ]
[ o := 2; lock(x)
  assert(o = 2); l2 := x
  assert(o = 2); l2 := l2 + 1
  assert(o = 2); x := l2
  o := 0; unlock(x) ]
Apply the reduction rule: the threads execute atomically. This is not the original program, since we have additional assertions; discharge them using sequential reasoning.

55 Compare to Owicki-Gries
We aren't done yet; we have only simplified the program. Now, we apply Owicki-Gries or any other method and finish proving { x = 0 } t1 ║ t2 { x = 2 }: we need 4 non-interference checks.
Compare to the original proof, before reduction:
{ t2loc = 0 → x = 0, t2loc = 5 → x = 1 }
lock(); owner := 1
{ t2loc = 0 → x = 0, t2loc = 5 → x = 1, owner = 1 }
t := x
{ t2loc = 0 → x = 0, t2loc = 5 → x = 1, owner = 1, t = x }
t := t + 1
{ t2loc = 0 → x = 0, t2loc = 5 → x = 1, owner = 1, t = x + 1 }
x := t
{ t2loc = 0 → x = 1, t2loc = 5 → x = 2, owner = 1 }
unlock(); t1loc := 5; owner := 0
{ t2loc = 0 → x = 1, t2loc = 5 → x = 2, t1loc = 5 }
30 non-interference checks + auxiliary variables

56 Reduction + Abstraction
Sometimes, adding more behaviours to an action can help.
t1: l1 := x; s1 := CAS(x, l1, l1 + 1)
t2: l2 := x; s2 := CAS(x, l2, l2 + 1)
Does l1 := x move right? No, it conflicts with the CAS from t2. We abstract the statement by adding more possible behaviours:
t1: l1 := *; s1 := CAS(x, l1, l1 + 1)
t2: l2 := *; s2 := CAS(x, l2, l2 + 1)
Now, we have made the mover checks pass by relaxing the semantics of the program. Is it worth doing this? Sometimes.
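Why is the abstraction sound? Replacing l1 := x with the havoc l1 := * only adds behaviours, so every behaviour of the original program is still a behaviour of the abstracted one. A brute-force check of that inclusion on a tiny domain (the CAS model and tuple encoding are ours):

```python
# Behaviours as tuples (initial x, value read into l1, final x, CAS success).
vals = (0, 1, 2)

def cas(x, old, new):
    """Compare-and-swap: returns (new value of x, success flag)."""
    return (new, True) if x == old else (x, False)

# Original: l1 := x, then CAS(x, l1, l1 + 1) -- l1 always equals x.
orig = {(x, x, *cas(x, x, x + 1)) for x in vals}
# Abstracted: l1 := * (any value), then the same CAS.
abst = {(x, l1, *cas(x, l1, l1 + 1)) for x in vals for l1 in vals}

print(orig <= abst)  # True: abstraction only adds behaviours
```

The failing CAS behaviours that the havoc introduces are exactly what makes l1 := * commute with the other thread's CAS.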

57 Reduction + Abstraction
x := x + 5 ║ x := 10 * x
Mover checks fail both ways. However, if the final proof only requires us to check that x > 0, we can replace both statements with
[ if (x > 0) { x := *; assume (x > 0); }  // pick a positive x
  else x := *; ]                          // pick any x
Mover checks pass now.

58 Reduction-based methods: Summary
A program simplification technique, not a full pre/post-condition proof. Merge actions to build larger and larger atomic sections; use abstraction wherever necessary.
Caveats: it is hard to write the auxiliary assertions.

59 Concurrency: Proof techniques
Owicki-Gries: auxiliary variables; many interference checks
Rely-Guarantee: compositional; restrictive in some settings
Lipton's reduction: program simplification; atomicity; auxiliary assertions

60 Next week… Systematic testing
Sequentialization and Bounded model checking

