Planning and Verification for Stochastic Processes with Asynchronous Events Håkan L. S. Younes Carnegie Mellon University.


1 Planning and Verification for Stochastic Processes with Asynchronous Events Håkan L. S. Younes Carnegie Mellon University

2 Introduction
- Asynchronous processes are abundant in the real world: telephone systems, computer networks, etc.
- Discrete-time models are inappropriate for systems with asynchronous events
- We need to go beyond semi-Markov models!

3 Two Problems
- Model checking: given a model of an asynchronous system, check whether some property holds
- Planning: in the presence of asynchronous events, find a control policy that satisfies specified objectives
- My thesis research: provide solutions to both problems

4 Illustrative Example: Tandem Queuing Network
- Two queues in series, with arrive, route, and depart events
- [Figure: a sample execution path showing the queue contents (q1, q2) evolving through states such as (0, 0), (1, 0), (2, 0), (1, 1) as events trigger at times t = 0, 1.2, 3.7, 3.9, 5.5]
- Question: with both queues empty, is the probability less than 0.5 that both queues become full within 5 seconds?

5 Probabilistic Model Checking
- Given a model M, a state s, and a property φ, does φ hold in s for M?
- Model: stochastic discrete event system
- Property: probabilistic temporal logic formula

6 Probabilistic Model Checking: Example
- With both queues empty, is the probability less than 0.5 that both queues become full within 5 seconds?
- State: q1 = 0 ∧ q2 = 0
- Property (CSL): P<0.5(true U≤5 q1 = 2 ∧ q2 = 2)
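As a small illustration of what verifying a path formula over a single sample path means here, the following is a minimal Python sketch of checking the time-bounded until along one simulated path. The (entry time, state) path representation, the dictionary keys q1/q2, and the simulate_tandem_network() call are assumptions for illustration, not the talk's implementation.

```python
def holds_on_path(path, bound, goal):
    """Check  true U<=bound goal  along one sample path.

    path: list of (entry_time, state) pairs produced by a discrete event
          simulator, in increasing time order.
    goal: predicate on states.
    """
    for t, state in path:
        if t > bound:
            break                       # states entered after the bound don't count
        if goal(state):
            return True
    return False

# Property from this slide: both queues full (capacity 2) within 5 seconds.
both_full = lambda s: s["q1"] == 2 and s["q2"] == 2
# holds_on_path(simulate_tandem_network(), 5.0, both_full)
```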

7 Statistical Solution Method [Younes & Simmons, CAV-02]
- Use discrete event simulation to generate sample paths
- Use acceptance sampling to verify probabilistic properties
- Hypothesis: P≥θ(φ); observation: verify φ over a sample path
- Not estimation!
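The acceptance-sampling step can be sketched with Wald's sequential probability ratio test. This is a minimal illustration, not Ymer's implementation: it assumes an indifference region of half-width delta around the threshold theta, error bounds alpha and beta, and a caller-supplied sample_path_holds() that simulates one fresh path and checks the path formula on it (for instance with holds_on_path above).

```python
import math

def sprt(sample_path_holds, theta, delta, alpha, beta):
    """Sequential test of H0: p >= theta + delta against H1: p <= theta - delta.

    Returns True if the probabilistic property P>=theta(phi) is accepted,
    False if it is rejected.  Each call to sample_path_holds() is one
    Bernoulli observation (simulate one path, check phi on it).
    Assumes 0 < theta - delta and theta + delta < 1.
    """
    p0, p1 = theta + delta, theta - delta
    accept = math.log(beta / (1.0 - alpha))     # accept H0 at or below this
    reject = math.log((1.0 - beta) / alpha)     # accept H1 at or above this
    llr = 0.0                                   # log-likelihood ratio of H1 vs H0
    while accept < llr < reject:
        if sample_path_holds():
            llr += math.log(p1 / p0)                      # success favors large p
        else:
            llr += math.log((1.0 - p1) / (1.0 - p0))      # failure favors small p
    return llr <= accept
```

For the tandem-network property, P<0.5(...) holds exactly when the hypothesis P≥0.5(...) is rejected, so one would run the test with theta = 0.5 and negate its answer.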

8 Why Statistical Approach?
- Benefits: insensitive to size of system; easy to trade accuracy for speed; easy to parallelize
- Alternative (numerical approach): memory intensive; limited to certain classes of systems

9 Tandem Queuing Network: Results [Younes et al., TACAS-04]
- [Plot: verification time in seconds (10^-2 to 10^6, log scale) against size of state space (10^1 to 10^11, log scale), for the property Φ = P≥0.5(true U≤T full), with ε = 10^-6, α = β = 10^-2, and δ = 0.5·10^-2]

10 Tandem Queuing Network: Distributed Sampling
- Use multiple machines to generate observations
- m1: Pentium IV 3 GHz, m2: Pentium III 733 MHz, m3: Pentium III 500 MHz

  n     | % observations (m1/m2/m3) | time | % observations (m1/m2) | time  | time, m1 only
  63    | 70/20/10                  | 0.46 | 71/29                  | 0.50  | 0.58
  2047  | 60/26/14                  | 1.28 | 70/30                  | 1.46  | 1.93
  65535 | 65/21/14                  | 26.29| 67/33                  | 33.89 | 44.85

11 Planning with Asynchronous Events
- Goal-oriented planning: satisfy a goal condition, expressed in CSL; local search with policy repair [Younes & Simmons, ICAPS-03, ICAPS-04]
- Decision-theoretic planning: maximize reward; generalized semi-Markov decision process (GSMDP) [Younes & Simmons, AAAI-04]

12 Generate, Test and Debug [Simmons, AAAI-88]
- Generate an initial policy
- Test whether the policy is good; if it is, stop
- If it is bad, debug and repair the policy, then repeat the test
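A hypothetical skeleton of this loop, for orientation only: generate_initial_policy, test_policy, and debug_policy stand in for the components described on the next two slides, and the repair budget is made up.

```python
def generate_test_debug(problem, generate_initial_policy, test_policy, debug_policy,
                        max_rounds=20):
    """Keep repairing the policy until it tests good (or the budget runs out)."""
    policy = generate_initial_policy(problem)
    for _ in range(max_rounds):
        good, failure_scenarios = test_policy(problem, policy)  # e.g. verify the CSL goal statistically
        if good:
            return policy
        policy = debug_policy(problem, policy, failure_scenarios)
    return policy  # best policy found within the repair budget
```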

13 Policy Generation
- Eliminate uncertainty: probabilistic planning problem → deterministic planning problem
- Solve using a temporal planner (e.g. VHPOP [Younes & Simmons, JAIR 20]) → temporal plan
- Generate training data by simulating the plan → state-action pairs
- Decision tree learning → policy (decision tree)
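The final learning step could look roughly like the following. VHPOP and the thesis's own decision-tree learner are not reproduced here; scikit-learn's DecisionTreeClassifier is used purely as a stand-in, and the feature encoding and action names are invented for illustration.

```python
from sklearn.tree import DecisionTreeClassifier

# Each training example pairs a state (here, a vector of boolean state
# variables) with the action the simulated temporal plan took in that state.
states  = [[1, 0, 0], [1, 1, 0], [0, 1, 1], [0, 0, 1]]
actions = ["a1", "a1", "a2", "noop"]

policy_tree = DecisionTreeClassifier().fit(states, actions)
print(policy_tree.predict([[1, 1, 1]]))   # action the learned policy would choose
```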

14 Policy Debugging
- Sample execution paths under the current policy
- Sample path analysis → failure scenarios
- Solve a deterministic planning problem taking the failure scenario into account → temporal plan
- Generate training data by simulating the plan → state-action pairs
- Incremental decision tree learning [Utgoff et al., MLJ 29] → revised policy

15 A Model of Stochastic Discrete Event Systems
- Generalized semi-Markov process (GSMP) [Matthes 1962]: a set of events E and a set of states S
- GSMDP: actions A ⊆ E are controllable events
- Policy: mapping from execution histories to sets of actions

16 Events
With each event e is associated:
- a condition φ_e identifying the set of states in which e is enabled
- a distribution G_e governing the time e must remain enabled before it triggers
- a distribution p_e(s′|s) determining the probability that the next state is s′ if e triggers in state s

17 Events: Example
- Three events e1, e2, e3 with enabling conditions φ1 = a, φ2 = b, φ3 = c and delay distributions G1 = Exp(1), G2 = D(2), G3 = U(0,1)
- [Figure: a sample execution in which e1 triggers at t = 0.6 and e2 at t = 2; when e1 triggers, e2 has already been enabled for 0.6 time units, so its remaining delay is distributed as D(1.4)]
- Asynchronous events ⇒ beyond semi-Markov
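A minimal sketch of GSMP dynamics that makes the "beyond semi-Markov" point concrete. The Event structure and the bookkeeping below are illustrative, not the thesis's implementation: each event carries a condition, a delay sampler for G_e, and a next-state sampler, and events that stay enabled across a transition keep their remaining clock instead of drawing a new delay.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Event:
    enabled: Callable[[dict], bool]       # condition phi_e
    sample_delay: Callable[[], float]     # sampler for G_e
    next_state: Callable[[dict], dict]    # sampler for p_e(s'|s)

def simulate(events: Dict[str, Event], state: dict, horizon: float):
    """Generate one sample path (list of (time, state) pairs) of a GSMP."""
    t, path = 0.0, [(0.0, dict(state))]
    clocks = {n: e.sample_delay() for n, e in events.items() if e.enabled(state)}
    while clocks:
        name, dt = min(clocks.items(), key=lambda kv: kv[1])  # next event to trigger
        if t + dt > horizon:
            break
        t += dt
        state = events[name].next_state(state)
        remaining = {n: c - dt for n, c in clocks.items() if n != name}
        clocks = {}
        for n, e in events.items():
            if e.enabled(state):
                # Still-enabled events keep their old clock; newly enabled events
                # (and the event that just triggered) draw a fresh delay.
                clocks[n] = remaining[n] if n in remaining else e.sample_delay()
        path.append((t, dict(state)))
    return path
```

In the slide's example, when e1 triggers at t = 0.6, e2 stays enabled and keeps its clock, so the time until e2 triggers is the deterministic 2 minus the 0.6 already elapsed, i.e. D(1.4) — a distribution attached to no event of the original model, which is why the process is not semi-Markov.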

18 Rewards and Optimality
- Lump-sum reward k(s, e, s′) associated with the transition from s to s′ caused by e
- Continuous reward rate r(s, A) associated with A being enabled in s
- Infinite-horizon discounted reward: unit reward earned at time t counts as e^(−αt)
- The optimal choice may depend on the entire execution history
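Putting the slide's quantities together, the objective can be written as follows. This is a plausible formalization consistent with the slide's notation, with discount rate α, trigger times 0 = t_0 < t_1 < t_2 < ..., triggering events e_i, visited states s_0, s_1, ..., and A_{i-1} the set of actions enabled in s_{i-1} under the policy π:

\[
v^{\pi}(s) \;=\; E^{\pi}_{s}\!\left[\sum_{i=1}^{\infty}\left(\int_{t_{i-1}}^{t_i} e^{-\alpha t}\, r(s_{i-1}, A_{i-1})\, dt \;+\; e^{-\alpha t_i}\, k(s_{i-1}, e_i, s_i)\right)\right]
\]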

19 GSMDP Solution Method
- GSMDP → continuous-time MDP, via phase-type distributions (approximation)
- Continuous-time MDP → discrete-time MDP, via uniformization [Jensen 1953]
- Solve the discrete-time MDP; the MDP policy is turned into a GSMDP policy by simulating phase transitions
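Only the uniformization arrow is sketched here (the phase-type step is the subject of the next slides). This is the standard textbook transformation, not Tempastic-DTP code; the Q_by_action representation and the discount handling are illustrative assumptions.

```python
import numpy as np

def uniformize(Q_by_action, alpha):
    """Turn a continuous-time MDP into an equivalent discrete-time MDP.

    Q_by_action: dict mapping each action to its |S| x |S| generator matrix
                 (off-diagonal entries are rates, rows sum to zero).
    alpha:       continuous-time discount rate.
    Returns transition matrices for the discrete-time MDP and the discrete
    discount factor; reward rates should also be rescaled by 1/(q + alpha).
    """
    q = max(np.max(-np.diag(Q)) for Q in Q_by_action.values())  # uniformization constant
    P_by_action = {a: np.eye(Q.shape[0]) + Q / q for a, Q in Q_by_action.items()}
    gamma = q / (q + alpha)
    return P_by_action, gamma
```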

20 Continuous Phase-Type Distributions [Neuts 1981]
- Time to absorption in a continuous-time Markov chain with n transient states
- [Figure: three examples — a single phase (exponential), a two-phase Coxian (continue from phase 1 to phase 2 with probability p, absorb with probability 1 − p), and an n-phase generalized Erlang]
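For concreteness, samplers for the two non-trivial shapes in the figure; the exact parameterization is one common reading of such diagrams and is an assumption, not taken from the slide.

```python
import random

def coxian2(lam1, lam2, p):
    """Two-phase Coxian: spend Exp(lam1) in phase 1, then with probability p
    also spend Exp(lam2) in phase 2 before absorption."""
    t = random.expovariate(lam1)
    return t + random.expovariate(lam2) if random.random() < p else t

def generalized_erlang(n, lam, p):
    """n-phase generalized Erlang: with probability p pass through all n
    exponential phases, otherwise absorb after the first phase."""
    phases = n if random.random() < p else 1
    return sum(random.expovariate(lam) for _ in range(phases))
```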

21 Method of Moments
- Approximate a general distribution G with a phase-type distribution PH by matching the first k moments
- The i-th moment: μ_i = E[X^i]
- Mean (first moment): μ_1; variance: σ² = μ_2 − μ_1²
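A sketch of one common two-moment recipe (not necessarily the exact fitting procedure used in the thesis): compute the squared coefficient of variation and pick a two-phase Coxian when variability is high, or an Erlang with roughly the right number of phases when it is low.

```python
def match_two_moments(mu1, mu2):
    """Choose a small phase-type distribution from the mean mu1 and second moment mu2."""
    cv2 = (mu2 - mu1 ** 2) / mu1 ** 2          # squared coefficient of variation
    if cv2 >= 0.5:
        # Two-phase Coxian matching both moments exactly.
        lam1 = 2.0 / mu1
        p = 1.0 / (2.0 * cv2)
        return ("coxian-2", lam1, p * lam1, p)
    # Low variability: an Erlang with k phases has cv2 = 1/k, so this matches
    # the mean exactly and the variance only approximately.
    k = max(1, round(1.0 / cv2))
    return ("erlang", k, k / mu1)
```

For the U(0,1) reboot time used in the next example, μ_1 = 1/2 and μ_2 = 1/3, so cv² = 1/3 and this recipe picks an Erlang with 3 phases of rate 6.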

22 System Administration
- Network of n machines
- Reward rate c(s) = k in states where k machines are up
- One crash event and one reboot action per machine
- At most one action enabled at any time (single agent)

23 System Administration: Performance
- [Plot: expected reward (roughly 15 to 50) against the number of machines n (1 to 13); reboot-time distribution U(0,1)]

24 System Administration: Performance

  size n | 1 moment: states, time (s) | 2 moments: states, time (s) | 3 moments: states, time (s)
  4      | 16, 0.36                   | 32, 3.57                    | 112, 10.30
  5      | 32, 0.82                   | 80, 7.72                    | 272, 22.33
  6      | 64, 1.89                   | 192, 16.24                  | 640, 40.98
  7      | 128, 3.65                  | 448, 28.04                  | 1472, 69.06
  8      | 256, 6.98                  | 1024, 48.11                 | 3328, 114.63
  9      | 512, 16.04                 | 2304, 80.27                 | 7424, 176.93
  10     | 1024, 33.58                | 5120, 136.4                 | 16384, 291.70
  11     | 2048, 66.00                | 24576, 264.17               | 35840, 481.10
  12     | 4096, 111.96               | 53248, 646.97               | 77824, 1051.33
  13     | 8192, 210.03               | 114688, 2588.95             | 167936, 3238.16

  State counts: 2^n (1 moment), (n+1)·2^n (2 moments), (1.5n+1)·2^n (3 moments)

25 Summary
- Asynchronous processes exist in the real world, and their complexity is manageable
- Model checking: statistical hypothesis testing and simulation
- Planning: sample path analysis and phase-type distributions

26 Challenges Ahead
- Evaluation of policy repair techniques
- Complexity results for optimal GSMDP planning (undecidable?)
- Discrete phase-type distributions: handle deterministic distributions

27 Tools
- Ymer: statistical probabilistic model checking
- Tempastic-DTP: decision-theoretic planning with asynchronous events

