On Statistical Model Checking of Stochastic Systems


On Statistical Model Checking of Stochastic Systems
Koushik Sen, Mahesh Viswanathan, Gul Agha
University of Illinois at Urbana-Champaign

Problem
Given a probabilistic model M (e.g., a Markov chain) and a CSL formula with unbounded until, φ = P<p[φ1 U φ2] (the probability that a path satisfies φ1 until φ2 is less than p), can we say M ⊨ φ using statistical model checking?

Solution
YES, with some assumptions: using Monte Carlo simulation of "finite paths" and a sequence of inter-related statistical hypothesis tests.

Model Assumptions
Sample execution paths can be generated through discrete-event simulation.
Execution paths are sequences of the form σ = s0 →(t0) s1 →(t1) s2 →(t2) …, where each si is a state of the model and ti ∈ R>0 is the time spent in state si before moving to state si+1.
A probability space can be defined on the execution paths of the model in such a way that the set of paths satisfying any path formula of the logic of interest (CSL or PCTL) is measurable.
The number of states of the system is finite.

Semi-Markov Chains (Simple Model)
A semi-Markov chain is a tuple (S, sI, P, Q, L):
S – finite set of states (let |S| = N)
sI – initial state
P : S × S → [0,1] – transition probability matrix
Q : S × S → (R≥0 → [0,1]) – continuous cumulative probability distribution function
L : S → 2^AP – labeling function, where AP is the set of atomic propositions
P(s,s′) gives the probability of a transition from s to s′; Q(s,s′) gives the distribution of the time for which the system remains in state s before moving to state s′.
Examples: network protocols with quantified non-determinism, randomized algorithms.
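
The following is a minimal Python sketch, not from the original slides, of how such a model and its discrete-event path sampler might be encoded; the class name, the nested-dictionary representation, and sample_prefix are illustrative assumptions.

import random

class SemiMarkovChain:
    """Minimal semi-Markov chain (S, sI, P, Q, L) as nested dictionaries."""
    def __init__(self, s_init, P, Q, L):
        self.s_init = s_init   # initial state sI
        self.P = P             # P[s][s2] = transition probability from s to s2
        self.Q = Q             # Q[(s, s2)]() samples a sojourn time from Q(s, s2)
        self.L = L             # L[s] = set of atomic propositions holding in s

    def step(self, s):
        """Sample the successor of s and the time spent in s before the move."""
        states, probs = zip(*self.P[s].items())
        s_next = random.choices(states, weights=probs)[0]
        return s_next, self.Q[(s, s_next)]()

def sample_prefix(model, n_steps):
    """Discrete-event simulation of a finite path prefix [(s0, t0), (s1, t1), ...]."""
    prefix, s = [], model.s_init
    for _ in range(n_steps):
        s_next, delay = model.step(s)
        prefix.append((s, delay))
        s = s_next
    return prefix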

Continuous Stochastic Logic (CSL)
State formulas: φ ::= true | a | φ ∧ φ | ¬φ | P_Q p(ψ)
Path formulas: ψ ::= φ U<t φ | φ U φ | X φ, where Q ∈ {<, >, ≥, ≤}
P<0.5(◊ full) – the probability that the queue becomes full is less than 0.5
P>0.98(¬retransmit U receive) – the probability that a message is eventually received successfully without any need for retransmission is greater than 0.98
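
Purely as an illustration (not part of the slides), the grammar above can be captured as a small abstract syntax tree in Python; every class name below is invented for this sketch.

from dataclasses import dataclass
from typing import Optional, Union

@dataclass
class TT:                        # the constant 'true'
    pass

@dataclass
class Atom:                      # atomic proposition a
    name: str

@dataclass
class And:                       # phi AND phi
    left: "StateFormula"
    right: "StateFormula"

@dataclass
class Not:                       # NOT phi
    arg: "StateFormula"

@dataclass
class Prob:                      # P_Q p(psi), Q in {<, >, >=, <=}
    op: str
    bound: float
    path: "PathFormula"

@dataclass
class Until:                     # phi1 U phi2, optionally time-bounded (U<t)
    left: "StateFormula"
    right: "StateFormula"
    time_bound: Optional[float] = None

@dataclass
class Next:                      # X phi
    arg: "StateFormula"

StateFormula = Union[TT, Atom, And, Not, Prob]
PathFormula = Union[Until, Next]

# P<0.5(eventually full) is sugar for P<0.5(true U full):
eventually_full = Prob("<", 0.5, Until(TT(), Atom("full")))
# P>0.98(not retransmit U receive):
reliable = Prob(">", 0.98, Until(Not(Atom("retransmit")), Atom("receive")))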

Goal
Model check properties in CSL against SMC models.
Main contribution: statistically model checking formulas of the form P<p[φ1 U φ2] against an SMC boils down to model checking the formula against the underlying Markov chain.

Relevant Part of Model and Logic
Markov chain (S, sI, P, L):
S – finite set of states (let |S| = N)
sI – initial state
P : S × S → [0,1] – transition probability matrix
L : S → 2^AP – labeling function, where AP is the set of atomic propositions
Unbounded until in CSL: P<p[φ1 U φ2]

Example 1
P<p[true U ERR] (i.e., P<p[◊ ERR])
[State diagram: initial state sI with states s1, s2 and absorbing states ERR and OK; transition probabilities q, 1−q, r, 1−r, and 1.]

Bounded Until (Checking s ⊨ P<p[◊<t a])
Given a simple semi-Markov chain M; paths in this model are infinite.
Want to check if s ⊨ P<p[◊<t a], a being an atomic proposition.
Given α, β, and δ1 (type I error, type II error, and indifference region); α and β bound the probability that our statistical algorithm gives a wrong answer.

Checking s ⊨ P<p[◊<t a]
Sample n paths from s; each path is of the form σ = s0 → s1 → s2 → … → sn, with sojourn times t0, t1, …, tn.
Sample a path until t0 + t1 + … + tn > t or a is satisfied.
Let f be the number of sampled paths that satisfy ◊<t a, and let y = f/n.
[Figure: the observation y is compared against the threshold p on the interval [0, 1].]
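
A minimal sketch, assuming a sampled path is represented as a list of (state, sojourn time) pairs and labels[s] gives the atomic propositions of s, of the per-path check behind f (names are illustrative, not the authors' code):

def satisfies_bounded_eventually(path, t_bound, a, labels):
    """Return True iff the finite timed path [(s0, t0), (s1, t1), ...]
    reaches a state labeled with `a` strictly before time t_bound."""
    entered_at = 0.0                 # time at which the current state was entered
    for state, sojourn in path:
        if entered_at >= t_bound:    # past the time bound: cannot satisfy the formula
            return False
        if a in labels[state]:
            return True
        entered_at += sojourn
    return False                     # prefix ended without reaching `a` in time

# y = f/n is then the fraction of the n sampled paths for which this returns True.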

Bounded Until (Checking s ⊨ P<p[◊<t a])
n is computed such that the following holds:
Pr[Y/n < p | p′ ≥ p + δ1] ≤ α
Pr[Y/n ≥ p | p′ ≤ p − δ1] ≤ β
where p′ is the actual probability that a path from s satisfies ◊<t a and Y ~ Binomial(n, p′).
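
The slides do not show how n is obtained; one possible brute-force construction, assuming 0 < p − δ1 and p + δ1 < 1 and the simple decision rule "report true iff y < p", is sketched below (function names are illustrative, and this is not necessarily the authors' exact procedure):

import math

def binom_cdf(k, n, q):
    """P[Y <= k] for Y ~ Binomial(n, q)."""
    return sum(math.comb(n, i) * q**i * (1 - q)**(n - i) for i in range(k + 1))

def sample_size(p, alpha, beta, delta1, n_max=100000):
    """Smallest n for which the rule 'accept P<p iff Y/n < p' satisfies
    Pr[Y/n <  p | p' >= p + delta1] <= alpha  and
    Pr[Y/n >= p | p' <= p - delta1] <= beta.
    Simple but slow; intended only to illustrate the definition."""
    for n in range(1, n_max + 1):
        c = math.ceil(n * p) - 1                  # largest count with c/n < p
        type1 = binom_cdf(c, n, p + delta1)       # worst case at p' = p + delta1
        type2 = 1 - binom_cdf(c, n, p - delta1)   # worst case at p' = p - delta1
        if type1 <= alpha and type2 <= beta:
            return n
    raise ValueError("no sample size up to n_max satisfies the bounds")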

Unbounded Until
Given a simple Markov chain M; assume paths in this model are infinite.
Want to check if s ⊨ P<p[◊ a], a being an atomic proposition.
Sample n paths from s. What is the length of each path to be sampled?

Unbounded Until
Simple strategy: sample a path until we encounter a state satisfying a.
What happens if there is a path none of whose extensions contains a state satisfying a? Non-termination.

Simple Example of Non-termination
[Figure: a small Markov chain with states labeled a and ¬a and transition probabilities q, 1−q, and 1; the rightmost ¬a state is absorbing.]
Once a sample path reaches that absorbing state, it will never encounter a state satisfying a.

Solution
[Figure: the same chain as above.]
Use a stopping probability ps (user supplied): at any state, stop sampling with probability ps.
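
A minimal sketch of sampling with a stopping probability, assuming the chain is encoded as nested dictionaries as in the earlier sketch; names are illustrative:

import random

def sample_with_stopping(P, labels, s0, a, p_stop):
    """Sample one path of the chain, stopping at every state with
    probability p_stop.  Returns True if a state satisfying `a` was
    reached before stopping, and False otherwise; with p_stop > 0
    every sample terminates with probability 1."""
    state = s0
    while True:
        if a in labels[state]:
            return True
        if random.random() < p_stop:             # stop sampling here
            return False
        states, probs = zip(*P[state].items())   # P[s][s'] = transition probability
        state = random.choices(states, weights=probs)[0]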

Modified Model
[Figure: the chain above with a stopping probability ps added at every state; each original transition probability is scaled by (1 − ps).]
Theorem: If a path from any state s ∈ S in the model M satisfies φ1 U φ2 with some probability p, then a path sampled from the same state in the modified model M′ will satisfy the same formula with probability at least p(1 − ps)^(N−1) q^(N−1), where N = |S| and q is the smallest non-zero transition probability in the model M.

Modified Model
Observation 1: Introduce the stopping probability ps in order to sample finite paths.

Not There Yet (in checking s ⊨ P<p[◊ a])
Sample n paths from s; each path is of the form σ = s0 → s1 → s2 → … → sn.
Sample a path until we stop; let f paths satisfy ◊ a and y = f/n.
Note that we can determine if a finite path satisfies ◊ a, but we cannot determine if a finite path satisfies ¬(◊ a).
[Figure: paths that stopped without reaching a are marked "?"; the observation y is compared against p on [0, 1].]

Solution (for checking s ⊨ P<p[◊ a])
Use ideas from numerical model-checking techniques:
Strue = {s ∈ S | s ⊨ a}
Sfalse = {s ∈ S | no path from s satisfies ◊ a}
S? = S − Strue − Sfalse
Theorem: The probability of reaching a state in Strue or Sfalse is 1.
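
When the transition structure of the chain is available, these sets can be computed by plain graph reachability; a sketch under the same nested-dictionary assumption (illustrative, not the authors' implementation):

def classify_states(P, labels, a):
    """Graph-based classification used in numerical model checking:
    S_true  = states satisfying a,
    S_false = states from which no state satisfying a is reachable,
    S_maybe = everything else."""
    states = set(P)
    s_true = {s for s in states if a in labels[s]}
    # predecessors over edges with non-zero probability
    preds = {s: set() for s in states}
    for s, succ in P.items():
        for t, prob in succ.items():
            if prob > 0:
                preds[t].add(s)
    # backward reachability from S_true
    can_reach = set(s_true)
    frontier = list(s_true)
    while frontier:
        t = frontier.pop()
        for s in preds[t]:
            if s not in can_reach:
                can_reach.add(s)
                frontier.append(s)
    s_false = states - can_reach
    s_maybe = states - s_true - s_false
    return s_true, s_false, s_maybe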

Solution (in checking s ⊨ P<p[◊ a])
Sample n paths from s; each path is of the form σ = s0 → s1 → s2 → … → sn.
Sample a path until we reach a state in Strue or Sfalse; let f paths satisfy ◊ a and let y = f/n.
[Figure: every sampled path is now decided; the observation y is compared against p on [0, 1].]

Solution (in checking s ⊨ P<p[◊ a])
But how do we check whether a state belongs to Sfalse, i.e., whether s ⊨ P=0[◊ a]?

Simple Situation (Coin Toss)
Given a biased coin with P[head] = p (unknown) and P[tail] = 1 − p, we want to check whether P[head] = 0 (i.e., p = 0).

Simple Situation (Coin Toss)
Toss the coin n times and suppose all the outcomes are tail, i.e., y = (x1 + … + xn)/n = 0. Can we say that P[head] = 0?

Simple Situation (Coin Toss)
Yes, provided the error in our decision is bounded by a respectably small number (say, α = β = 0.01).
Type I error = P[Y ≤ y | p > 0] ≤ α, where Y ~ Binomial(n, p).

Simple Situation (Coin Toss)
Problem: we cannot bound the type I error, because P[Y = 0] cannot be bounded when Y ~ Binomial(n, p) and all we know is p > 0 (P[Y = 0] = (1 − p)^n approaches 1 as p approaches 0).

Simple Situation (Coin Toss)
Solution: we can bound P[Y = 0] if Y ~ Binomial(n, p) and p ≥ δ.
Assume p does not lie in the range (0, δ), where 0 < δ < 1; then
type I error = P[Y ≤ y | p ≥ δ] ≤ P[Y = 0 | p = δ].

Simple Situation (Coin Toss)
Therefore, given α and δ, compute n such that P[Y = 0] ≤ α, where Y ~ Binomial(n, δ).
Collect n samples x1, x2, …, xn. Say P[head] = 0 if (x1 + … + xn)/n = 0; else say P[head] > 0.
Note: type II error = P[Y > 0 | p = 0] = 0 < β, so nothing to worry about.
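
Since P[Y = 0 | p = δ] = (1 − δ)^n, the required n satisfies (1 − δ)^n ≤ α; a small sketch (illustrative function names):

import math

def zero_probability_sample_size(alpha, delta):
    """Smallest n with P[Y = 0] <= alpha for Y ~ Binomial(n, delta),
    i.e., (1 - delta)**n <= alpha, assuming 0 < alpha < 1 and 0 < delta < 1."""
    return math.ceil(math.log(alpha) / math.log(1.0 - delta))

def decide_zero(samples):
    """Report 'p = 0' iff every observed sample is 0 (all tails)."""
    return all(x == 0 for x in samples)

# For example, alpha = 0.01 and delta = 0.01 give n = 459 all-tail tosses
# before we may declare P[head] = 0.
n = zero_probability_sample_size(0.01, 0.01)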

Simple Situation (Coin Toss)
Observation 2: Introduce δ and assume that p does not lie in the range (0, δ).

Sub-task: check if s ∈ Sfalse, i.e., s ⊨ P=0[◊ a]
Use Observation 1 and Observation 2: assume that Pr[◊ a] in M′ does not lie in the range (0, δ2), where δ2 is provided as input to the model checker.

Check if s ∈ Sfalse, i.e., s ⊨ P=0[◊ a]
Sample n paths from s; each path is of the form σ = s0 → s1 → s2 → … → sn.
Sample each path until we stop (Observation 1).
Say s ⊭ P=0[◊ a] if at least one path satisfies ◊ a; if none of the paths satisfies ◊ a, then say s ⊨ P=0[◊ a].
[Figure: stopped paths are marked "?"; the observation is compared against p = 0.]
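
A possible end-to-end sketch of this sub-task, combining Observations 1 and 2 under the nested-dictionary encoding used earlier; the guarantee relies on the assumption that Pr[◊ a] in M′ is not in (0, δ2), and all names are illustrative:

import math
import random

def probably_in_s_false(P, labels, s, a, p_stop, alpha, delta2):
    """Decide s |= P=0[<> a] by sampling finite paths with stopping
    probability p_stop.  P[s][s'] are transition probabilities, labels[s]
    is the set of atomic propositions of s; every state is assumed to
    have at least one successor."""
    n = math.ceil(math.log(alpha) / math.log(1.0 - delta2))
    for _ in range(n):
        state = s
        while True:
            if a in labels[state]:           # this path satisfies <> a
                return False                 # so s is not in S_false
            if random.random() < p_stop:     # stop this sample (Observation 1)
                break
            states, probs = zip(*P[state].items())
            state = random.choices(states, weights=probs)[0]
    return True                              # no sampled path reached a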

Comparison between P<p[◊ a] and P=0[◊ a]
[Figure: for P<p[◊ a] the observation y is compared against the threshold p inside [0, 1]; for P=0[◊ a] the observation is compared against p = 0 (Observation 1).]

Model-checking Other Operators
Essentially the same as the statistical model-checking techniques proposed in [Younes and Simmons, CAV'02] and [Sen, Viswanathan, Agha, CAV'04].

Main Result Summarized
Our algorithm A takes as input a stochastic model M, a formula φ in CSL, error bounds α and β, and three other parameters δ1, δ2, and ps. The result of model checking, denoted A_{δ1,δ2,ps}(M, φ, α, β), can be either true or false.

Main Result Summarized
Theorem: Suppose the model M satisfies the following conditions:
C1: For every subformula of the form P≥p(ψ) in the formula φ and for every state s in M, the probability that a path from s satisfies ψ does not lie in the range [(p − δ1 − δ2)/(1 − δ2), (p + δ1)/(1 − δ2)].
C2: For every subformula of the form φ1 U φ2 and for every state s in M, the probability that a path from s satisfies φ1 U φ2 does not lie in the range (0, δ2/((1 − ps)^(N−1) q^(N−1))], where N is the number of states in the model M and q is the smallest non-zero transition probability in M.
Then the algorithm provides the following guarantees:
R1: Pr[A_{δ1,δ2,ps}(M, φ, α, β) = true | M ⊭ φ] ≤ α and Pr[A_{δ1,δ2,ps}(M, φ, α, β) = false | M ⊨ φ] ≤ β.

Optimizations
Caching of results.
Discount optimization: checking s ∈ Sfalse is expensive, so do not check s ∈ Sfalse for every state on the path; check whether a state s ∈ Sfalse only with probability pd.
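
A sketch of how the discount optimization might look inside the path sampler; in_s_false stands for the (expensive, possibly statistical) membership test described earlier and is a hypothetical callback, as are the other names:

import random

def sample_until_decided(P, labels, s0, a, p_stop, p_discount,
                         in_s_false, cache=None):
    """Sample one path, invoking the expensive S_false test only with
    probability p_discount at each visited state, and caching its answers."""
    cache = {} if cache is None else cache
    state = s0
    while True:
        if a in labels[state]:
            return True                       # path satisfies <> a
        if random.random() < p_discount:      # occasionally pay for the check
            if state not in cache:
                cache[state] = in_s_false(state)
            if cache[state]:
                return False                  # provably cannot reach a
        if random.random() < p_stop:
            return False                      # stopped (Observation 1)
        states, probs = zip(*P[state].items())
        state = random.choices(states, weights=probs)[0]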

Conclusion
An interesting idea showing that unbounded until can be model checked statistically, provided certain assumptions about the model hold.
Statistical model checking has limitations in general:
If δ1, δ2, and ps have to be chosen small, the running time can be considerably high; however, if the values of δ1, δ2, and ps are reasonable, the running time is fast.
Running time increases if we want better error bounds (α, β).
Running time increases if the time bound in a bounded until is large.
There is always a model for which the approach does not work, for both bounded and unbounded until.
Advantages: no need to store states, sample as required; probabilities can also be estimated (see FCS'05, QAPL'05, QEST'05) using the VESTA tool.