
1 Monte Carlo Model Checking Radu Grosu SUNY at Stony Brook Joint work with Scott A. Smolka

2 Talk Outline
1. Model Checking
2. Randomized Algorithms
3. LTL Model Checking
4. Probability Theory Primer
5. Monte Carlo Model Checking
6. Implementation & Results
7. Conclusions & Open Problem

3 Model Checking: Is system S a model of formula φ?

4 Model Checking. S is a nondeterministic/concurrent system. φ is a temporal logic formula, in our case Linear Temporal Logic (LTL). Basic idea: intelligently explore S's state space in an attempt to establish S ⊨ φ.

5 Model Checking's Fly in the Ointment: State Explosion. The size of S's state transition graph is O(2^|S|)! Techniques for coping with it: Symbolic MC (OBDDs), Symmetry Reduction, Partial Order Reduction, Abstraction Refinement, Bounded Model Checking. (Figure: the computation tree and its diameter.)

6 Monte Carlo Approach: take N(ε, δ) independent samples, with error margin ε and confidence ratio δ. (Figure: the computation tree with its recurrence diameter, and an LTL property.)

7 Randomized Algorithms. Huge impact on CS: (distributed) algorithms, complexity theory, cryptography, etc. The next step taken by the algorithm may depend on a random choice (coin flip). Benefits of randomization include simplicity, efficiency, and symmetry breaking.

8 Randomized Algorithms. Monte Carlo: may produce an incorrect result, but with bounded error probability. Example: Rabin's primality testing algorithm. Las Vegas: always gives a correct result, but the running time is a random variable. Example: Randomized Quicksort.

9 Linear Temporal Logic. An LTL formula is made up of atomic propositions p, boolean connectives ∧, ∨, ¬, and temporal modalities X (neXt) and U (Until). Safety: "nothing bad ever happens", e.g. G(¬(pc_1 = cs ∧ pc_2 = cs)), where G is a derived modality (Globally). Liveness: "something good eventually happens", e.g. G(req → F serviced), where F is a derived modality (Finally).

10 LTL Model Checking. Every LTL formula φ can be translated into a Büchi automaton B_φ whose language is the set of infinite words satisfying φ. Automata-theoretic approach: S ⊨ φ iff L(B_S) ⊆ L(B_φ) iff L(B_S × B_¬φ) = ∅.

11 Emptiness Checking. Checking non-emptiness is equivalent to finding an accepting cycle reachable from an initial state (a lasso). The Double Depth-First Search (DDFS) algorithm can be used to search for such cycles, and this can be done on-the-fly! (Figure: a lasso over states s_1 … s_n, explored by DFS 1 and DFS 2.)
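For reference, a minimal Python sketch of the nested (double) DFS idea: report True iff an accepting state lies on a cycle reachable from the initial state. The graph interface (successors, accepting) and all names are illustrative assumptions, not taken from the slides; counterexample extraction is omitted.

def double_dfs(init, successors, accepting):
    visited1, visited2 = set(), set()

    def dfs2(s, seed):
        # Nested search: look for a path from s back to the accepting seed state.
        for t in successors(s):
            if t == seed:
                return True
            if t not in visited2:
                visited2.add(t)
                if dfs2(t, seed):
                    return True
        return False

    def dfs1(s):
        visited1.add(s)
        for t in successors(s):
            if t not in visited1 and dfs1(t):
                return True
        # Post-order: launch the nested search from each accepting state.
        if accepting(s):
            visited2.add(s)
            return dfs2(s, s)
        return False

    return dfs1(init)

# Tiny hypothetical example: state 2 is accepting and lies on the cycle 1-2-1.
print(double_dfs(0, lambda s: {0: [1], 1: [2], 2: [1]}[s], lambda s: s == 2))  # True

The correctness of this scheme relies on the nested searches being started in post-order of the first search, which is what allows the visited2 set to be shared across all of them.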

12 Bernoulli Random Variable (coin flip). A Bernoulli RV Z takes value Z = 1 (success) or Z = 0 (failure). Probability mass function: p(1) = Pr[Z = 1] = p_z, p(0) = Pr[Z = 0] = 1 - p_z = q_z. Expectation: E[Z] = p_z.

13 Geometric Random Variable. A geometric RV X with parameter p_z counts the number of independent trials until the first success. Probability mass function: p(N) = Pr[X = N] = q_z^(N-1) · p_z. Cumulative distribution function: F(N) = Pr[X ≤ N] = Σ_{i ≤ N} p(i) = 1 - q_z^N.
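As a quick numeric check of the closed form, the Python sketch below compares the summed mass function with 1 - q_z^N; the value of p_z is arbitrary and purely illustrative, not from the slides.

# Numeric check of the geometric CDF: F(N) = 1 - q_z^N should match the
# sum of p(i) = q_z^(i-1) * p_z for i = 1..N (up to rounding).
p_z = 0.01
q_z = 1.0 - p_z

def pmf(i):
    """Pr[X = i]: first success on trial i."""
    return q_z ** (i - 1) * p_z

def cdf_direct(n):
    """Pr[X <= n] by direct summation of the mass function."""
    return sum(pmf(i) for i in range(1, n + 1))

def cdf_closed(n):
    """Pr[X <= n] in closed form."""
    return 1.0 - q_z ** n

for n in (1, 10, 100):
    print(n, cdf_direct(n), cdf_closed(n))   # the two columns agree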

14 How Many Trials? Requiring Pr[X ≤ N] ≥ 1 - δ, i.e. 1 - q_z^N ≥ 1 - δ, gives q_z^N ≤ δ and hence N ≥ ln(δ) / ln(1 - p_z) (both logarithms are negative). This is a lower bound on the number of trials N needed to achieve success with confidence ratio δ.

15 What If p_z Is Unknown? Requiring Pr[X ≤ N] ≥ 1 - δ and assuming p_z ≥ ε yields N ≥ ln(δ) / ln(1 - ε) ≥ ln(δ) / ln(1 - p_z). This is a lower bound on the number of trials N needed to achieve success with confidence ratio δ and error margin ε.
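A small Python helper, as a sketch of this bound: it computes the smallest N with (1 - ε)^N ≤ δ and checks that the probability of seeing no success when p_z ≥ ε is then at most δ. The concrete ε and δ values are illustrative, not taken from the experiments.

import math

def num_trials(eps, delta):
    """Smallest N satisfying (1 - eps)^N <= delta, i.e. N >= ln(delta) / ln(1 - eps)."""
    return math.ceil(math.log(delta) / math.log(1.0 - eps))

eps, delta = 0.01, 0.05            # illustrative error margin and confidence ratio
N = num_trials(eps, delta)
print(N)                           # 299 trials for these parameters

# If p_z >= eps, the chance of no success in N trials is
# (1 - p_z)^N <= (1 - eps)^N <= delta, which bounds the Type I error.
print((1.0 - eps) ** N <= delta)   # True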

16 Statistical Hypothesis Testing. Example: given a fair coin and a biased coin. Null hypothesis H_0: the fair coin was selected. Alternative hypothesis H_1: the biased coin was selected. Hypothesis testing: perform N trials; if the number of heads is LOW, reject H_0; else fail to reject H_0.

17 Statistical Hypothesis Testing

                       H_0 is True                      H_0 is False
reject H_0             Type I error (prob. α)           correct to reject H_0
fail to reject H_0     correct to fail to reject H_0    Type II error (prob. β)

18 Hypothesis Testing – Our Case. Null hypothesis H_0: p_z ≥ ε. Alternative hypothesis H_1: p_z < ε. If there is no success after N trials, reject H_0. Type I error: α = Pr[X > N | H_0] ≤ δ.

19 Monte Carlo Model Checking. Sample space: the lassos of B_S × B_¬φ. Bernoulli random variable Z: outcome = 1 if the randomly chosen lasso is accepting, outcome = 0 otherwise. p_Z = Σ_i p_i Z_i (the expectation of Z, i.e. the probability of drawing an accepting lasso), where p_i is the probability of lasso i under a uniform random walk.

20 Lasso Probability Space. Lassos generated by a uniform random walk: L_1 = 11, L_2 = 1244, L_3 = 1231, L_4 = 12344, with Pr[L_1] = ½, Pr[L_2] = ¼, Pr[L_3] = ⅛, Pr[L_4] = ⅛. Non-accepting mass q_Z = Pr[L_1] + Pr[L_2] = ¾; accepting mass p_Z = Pr[L_3] + Pr[L_4] = ¼. (Figure: the four-state example automaton.)
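The slide's numbers can be reproduced by enumerating all lassos of a uniform random walk. The Python sketch below does this; the transition relation succ is an assumption inferred from the listed lassos and probabilities, while the choice of accepting lassos (L_3 and L_4) is taken from the slide.

from fractions import Fraction

# Inferred transition relation of the example (assumption, not from the slide text).
succ = {1: [1, 2], 2: [3, 4], 3: [1, 4], 4: [4]}

def lassos(state, path, prob, out):
    """Extend the walk `path` (reached with probability `prob`) until a state repeats."""
    step = Fraction(1, len(succ[state]))
    for nxt in succ[state]:
        if nxt in path:                          # a repeated state closes a lasso
            out.append((path + [nxt], prob * step))
        else:
            lassos(nxt, path + [nxt], prob * step, out)
    return out

all_lassos = lassos(1, [1], Fraction(1), [])
for walk, p in all_lassos:
    print("".join(map(str, walk)), p)            # 11: 1/2, 1244: 1/4, 1231: 1/8, 12344: 1/8

accepting = {"1231", "12344"}                    # L3 and L4, as on the slide
p_Z = sum(p for walk, p in all_lassos if "".join(map(str, walk)) in accepting)
print("p_Z =", p_Z, " q_Z =", 1 - p_Z)           # p_Z = 1/4, q_Z = 3/4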

21 Monte Carlo Model Checking (MC²)

input: B = (Σ, Q, Q_0, δ, F), ε, δ
N = ln(δ) / ln(1 - ε)
for (i = 1; i ≤ N; i++)
    if (RL(B) == 1) return (1, error-trace);
return (0, "reject H_0 with α = Pr[X > N | H_0] < δ");

where RL(B) performs a uniform random walk through B (storing the states encountered in a hash table) to obtain a random sample (a lasso).

22 Random Lasso (RL) Algorithm
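A minimal Python sketch of a random-lasso sampler plugged into the MC² loop of slide 21. The automaton encoding, the acceptance test (an accepting state on the cycle of the lasso), and the toy example are assumptions for illustration, not the authors' jMocha implementation.

import math
import random

def random_lasso(succ, init, accepting, rng):
    """Uniform random walk from `init` until a state repeats (a lasso).
    Returns (1, walk) if the cycle of the lasso contains an accepting state,
    else (0, walk). The dict of visited states plays the role of the hash
    table mentioned on slide 21."""
    position = {init: 0}              # state -> index at which it was first seen
    walk = [init]
    state = init
    while True:
        state = rng.choice(succ[state])
        if state in position:         # the walk closes a cycle: lasso found
            cycle = walk[position[state]:]
            return (1 if any(accepting(s) for s in cycle) else 0), walk + [state]
        position[state] = len(walk)
        walk.append(state)

def mc2(succ, init, accepting, eps, delta, rng=None):
    """MC^2 loop: sample N = ceil(ln(delta)/ln(1-eps)) random lassos; report a
    counter-example if an accepting lasso is found, otherwise reject H_0."""
    rng = rng or random.Random(0)
    N = math.ceil(math.log(delta) / math.log(1.0 - eps))
    for _ in range(N):
        found, walk = random_lasso(succ, init, accepting, rng)
        if found:
            return True, walk         # error trace (accepting lasso)
    return False, None                # "reject H_0 with alpha <= delta"

# Toy automaton (hypothetical): accepting state 3 lies on the cycle 1-2-3-1.
succ = {1: [1, 2], 2: [3, 4], 3: [1, 4], 4: [4]}
print(mc2(succ, 1, lambda s: s == 3, eps=0.01, delta=0.05))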

23 Monte Carlo Model Checking. Theorem: Given a Büchi automaton B, error margin ε, and confidence ratio δ, if MC² fails to find a counter-example, then Pr[X > N | H_0] ≤ δ, where N = ln(δ) / ln(1 - ε).

24 Monte Carlo Model Checking. Theorem: Given a Büchi automaton B with diameter D, error margin ε, and confidence ratio δ, MC² runs in time O(N·D) and uses space O(D), where N = ln(δ) / ln(1 - ε). Cf. DDFS, which runs in O(2^(|S|+|φ|)) time for B = B_S × B_¬φ.

25 Implementation. We implemented DDFS and MC² in the jMocha model checker for synchronous systems specified using Reactive Modules. The performance and scalability of MC² compare very favorably to DDFS.

26 DPh: Symmetric Unfair Version (Deadlock freedom)

27 DPh: Symmetric Unfair Version (Starvation freedom)

28 DPh: Asymmetric Fair Version (Deadlock freedom). δ = 10^-1, ε = 1.8·10^-4, N = 1257.

29 DPh: Asymmetric Fair Version (Starvation freedom). δ = 10^-1, ε = 1.8·10^-4, N = 1257.

30 Alternative Sampling Strategies. (Figure: a chain automaton 0, 1, …, n-1, n, for which Pr[L_n] = O(2^-n).) Multilasso sampling: ignores back-edges that do not lead to an accepting lasso. Probabilistic systems: there is a natural way to assign a probability to an RL. Input partitioning: partition the input into classes that trigger the same behavior (guards).

31 Related Work Heimdahl et al.’s Lurch debugger. Mihail & Papadimitriou (and others) use random walks to sample system state space. Herault et al. use bounded model checking to compute an (ε,δ)-approx. for “positive LTL”. Probabilistic Model Checking of Markov Chains: ETMCC, PRISM, PIOAtool, and others.

32 Conclusions. MC² is the first randomized, Monte Carlo algorithm for the classical problem of temporal-logic model checking. Future work: use BDDs to improve run time, and take samples in parallel! Open problem: branching-time temporal logic (e.g., CTL, modal mu-calculus).

