CSE-473 Artificial Intelligence: Partially-Observable MDPs (POMDPs)


2 CSE-473 Artificial Intelligence: Partially-Observable MDPs (POMDPs)

3 Classical Planning. Diagram: the agent asks "What action next?" while exchanging Percepts and Actions with the Environment. Assumptions: environment is static, fully observable, deterministic (predictable), discrete; percepts are perfect.

4 Stochastic Planning (MDPs, Reinforcement Learning). Same agent/environment diagram ("What action next?"). Assumptions: environment is static, fully observable, stochastic (unpredictable), discrete; percepts are perfect.

5 Partially-Observable MDPs. Same agent/environment diagram ("What action next?"). Assumptions: environment is static, partially observable, stochastic (unpredictable), discrete; percepts are noisy.

6 Markov Decision Process (MDP). S: set of states; A: set of actions; Pr(s'|s,a): transition model; R(s,a,s'): reward model; γ: discount factor; s_0: start state.

7 Objective of a Fully Observable MDP. Find a policy π: S → A that maximizes expected discounted reward over an infinite horizon, assuming full observability.

8 Partially-Observable MDP. S: set of states; A: set of actions; Pr(s'|s,a): transition model; R(s,a,s'): reward model; γ: discount factor; s_0: start state; E: set of possible evidence (observations); Pr(e|s): observation model.
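
The slide defines the tuple only abstractly; the following is a minimal Python sketch of it as a container, with field names and type choices that are illustrative assumptions rather than anything from the deck.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class POMDP:
    """Container for the POMDP tuple on this slide (all names are illustrative)."""
    states: List[str]                               # S
    actions: List[str]                              # A
    transition: Dict[Tuple[str, str, str], float]   # Pr(s' | s, a)
    reward: Dict[Tuple[str, str, str], float]       # R(s, a, s')
    gamma: float                                    # discount factor
    start_state: str                                # s_0
    evidence: List[str]                             # E (possible observations)
    observation: Dict[Tuple[str, str], float]       # Pr(e | s)
```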

9 Objective of a POMDP. Find a policy π: BeliefStates(S) → A, where a belief state is a probability distribution over states, that maximizes expected discounted reward over an infinite horizon under partial observability.

10 Classical Planning (heaven/hell example). The world is deterministic and the state is observable, so the solution is a sequential plan. Rewards: +100 (heaven), -100 (hell).

11 MDP-Style Planning (heaven/hell example). The world is stochastic but the state is observable, so the solution is a policy.

12 Stochastic, Partially Observable. The agent starts at a sign and does not know which side is heaven and which is hell.

13 Belief State. Diagram: 50% probability on each of the two possible world configurations. A belief state describes the state of the agent's mind, not just of the world. Note: it is a probability distribution, so the probabilities sum to 1.

14 Planning in Belief Space. Diagram: from the initial 50%/50% belief, moves N, W, and E lead to new belief states that are still 50%/50%; expected reward: 0. For now, assume movement is deterministic.

15 Partially-Observable MDP (recap). S: set of states; A: set of actions; Pr(s'|s,a): transition model; R(s,a,s'): reward model; γ: discount factor; s_0: start state; E: set of possible evidence (observations); Pr(e|s): observation model.

16 Evidence Model. S = {s_wb, s_eb, s_wm, s_em, s_wul, s_eul, s_wur, s_eur}; E = {heat}. Pr(e|s): Pr(heat | s_eb) = 1.0, Pr(heat | s_wb) = 0.2, Pr(heat | s_other) = 0.0. (Diagram: the sign between the bottom states s_wb and s_eb.)

17 Planning in Belief Space. Diagram: from the 50%/50% belief the agent moves S and senses. Observing heat updates the belief to 17% (s_wb) / 83% (s_eb); observing ¬heat updates it to 100% (s_wb) / 0% (s_eb). (Pr(heat | s_eb) = 1.0, Pr(heat | s_wb) = 0.2.)
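
A minimal Python sketch of the Bayes update behind these numbers, assuming a uniform 50%/50% prior over the two bottom states; the function and variable names are made up for illustration.

```python
def belief_update(prior, likelihood):
    """Bayes rule: posterior(s) is proportional to likelihood(e|s) * prior(s)."""
    unnorm = {s: likelihood[s] * p for s, p in prior.items()}
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}

prior = {"s_wb": 0.5, "s_eb": 0.5}
p_heat = {"s_wb": 0.2, "s_eb": 1.0}                   # Pr(heat | s) from the evidence model
p_no_heat = {s: 1.0 - p for s, p in p_heat.items()}   # Pr(¬heat | s)

print(belief_update(prior, p_heat))     # ~ {'s_wb': 0.17, 's_eb': 0.83}
print(belief_update(prior, p_no_heat))  # = {'s_wb': 1.0, 's_eb': 0.0}
```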

18 POMDPs. In POMDPs we apply the very same idea as in MDPs. Since the state is not observable, the agent has to make its decisions based on the belief state, which is a posterior distribution over states. Let b be the agent's belief about the current state. POMDPs compute a value function over belief space: V_T(b) = γ max_u [ r(b,u) + ∫ V_{T-1}(b') p(b' | u, b) db' ].

19 Problems. Each belief is a probability distribution, so each value in a POMDP is a function of an entire probability distribution. This is problematic, since probability distributions are continuous. Moreover, how many belief states are there? For finite worlds with finite state, action, and measurement spaces and finite horizons, however, we can effectively represent the value functions by piecewise linear functions.
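
As a hedged aside, the usual way to write such a piecewise-linear, convex value function is as a maximum over a finite set of linear components (often called alpha-vectors); the notation below is conventional rather than taken from the slides.

```latex
% Piecewise-linear, convex value function: the maximum over a finite set
% of linear components (alpha-vectors) \alpha_k.
V(b) = \max_k \sum_x \alpha_k(x)\, b(x)
```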

20 An Illustrative Example. A two-state world (x_1, x_2). Actions u_1 and u_2 yield immediate payoffs; action u_3 is a sensing action that produces measurements.

21 The Parameters of the Example. The actions u_1 and u_2 are terminal actions. The action u_3 is a sensing action that potentially leads to a state transition. The horizon is finite and γ = 1.

22 Payoff in POMDPs. In MDPs, the payoff (or return) depends on the state of the system. In POMDPs, however, the true state is not exactly known. Therefore, we compute the expected payoff by integrating over all states:
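
A reconstruction of that expected payoff, consistent with the sentence above (standard notation, not copied from the slide):

```latex
% Expected payoff of action u under belief b: average the state payoff r(x,u)
% over the states, weighted by the belief (the sum form is the finite-state case).
r(b,u) = E_x\big[r(x,u)\big] = \int r(x,u)\, b(x)\, dx = \sum_x r(x,u)\, b(x)
```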

23 Payoffs in Our Example (1). If we are totally certain that we are in state x_1 and execute action u_1, we receive a reward of -100. If, on the other hand, we definitely know that we are in x_2 and execute u_1, the reward is +100. In between, the payoff is the linear combination of the two extreme values, weighted by the probabilities.
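
Writing p_1 for the probability the belief assigns to x_1, that linear combination for action u_1 reads (reconstructed from the two rewards quoted above):

```latex
% With p_1 = b(x_1) and the rewards -100 (in x_1) and +100 (in x_2):
r(b, u_1) = -100\, p_1 + 100\,(1 - p_1)
```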

24 Payoffs in Our Example (2).

25 The Resulting Policy for T=1. Given a finite POMDP with T=1, we would use V_1(b) to determine the optimal policy. In our example, the optimal policy for T=1 picks, at each belief, the action with the higher expected payoff; this is the upper thick graph in the diagram.
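
In symbols, one hedged reading of this rule (consistent with the next slide, which takes the maximum of the payoff functions) is:

```latex
% Greedy rule over expected immediate payoffs for horizon T = 1.
\pi_1(b) = \arg\max_u\, r(b,u), \qquad V_1(b) = \max_u\, r(b,u)
```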

26 Piecewise Linearity, Convexity. The resulting value function V_1(b) is the maximum of the three payoff functions at each point; it is therefore piecewise linear and convex.

27 Pruning. If we carefully consider V_1(b), we see that only the first two components contribute. The third component can therefore safely be pruned away from V_1(b).

28 Payoffs in Our Example (2).

29 Increasing the Time Horizon. Assume the robot can make an observation before deciding on an action. (Figure: V_1(b).)

30 Increasing the Time Horizon. Assume the robot can make an observation before deciding on an action. Suppose the robot perceives z_1, for which p(z_1 | x_1) = 0.7 and p(z_1 | x_2) = 0.3. Given the observation z_1 we update the belief using Bayes' rule.

31 Value Function. (Figure: V_1(b), the belief update b'(b | z_1), and the resulting V_1(b | z_1).)

32 Increasing the Time Horizon. Assume the robot can make an observation before deciding on an action. Suppose the robot perceives z_1, for which p(z_1 | x_1) = 0.7 and p(z_1 | x_2) = 0.3. Given the observation z_1 we update the belief using Bayes' rule; V_1(b | z_1) is then V_1 evaluated at the updated belief.
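
A hedged reconstruction of that Bayes update, writing p_1 for the prior belief assigned to x_1:

```latex
% Bayes update for observation z_1, with p(z_1|x_1) = 0.7 and p(z_1|x_2) = 0.3.
p(x_1 \mid z_1) = \frac{0.7\, p_1}{p(z_1)}, \qquad p(z_1) = 0.7\, p_1 + 0.3\,(1 - p_1)
```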

33 Expected Value after Measuring. Since we do not know in advance what the next measurement will be, we have to compute the expected belief, and with it the expected value after measuring.

34 Expected Value after Measuring (continued). Averaging V_1 at the updated beliefs for z_1 and z_2, weighted by how likely each measurement is, gives the expected value function \bar{V}_1(b), as sketched below.
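
A hedged reconstruction of the expectation these two slides build toward, in standard POMDP notation (the measurement probabilities are computed from the current belief):

```latex
% Expected value after measuring: average V_1 at the updated beliefs,
% weighted by how likely each measurement is under the current belief b.
\bar{V}_1(b) = E_z\big[V_1(b \mid z)\big] = \sum_i p(z_i \mid b)\, V_1(b \mid z_i),
\qquad p(z_i \mid b) = \sum_x p(z_i \mid x)\, b(x)
```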

35 Resulting Value Function. The four possible combinations (which linear component is maximal after z_1 times which is maximal after z_2) yield the following function, which can then be simplified and pruned.

36 Value Function. (Figure: the belief update b'(b | z_1), the weighted pieces p(z_1) V_1(b | z_1) and p(z_2) V_1(b | z_2), and their sum \bar{V}_1(b).)

37 State Transitions (Prediction). When the agent selects u_3, its state potentially changes. When computing the value function, we have to take these potential state changes into account.
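
A sketch of this prediction step in standard notation; the concrete transition probabilities p(x' | x, u_3) belong to the example but are not listed here:

```latex
% Prediction (state transition) step for the sensing action u_3.
b'(x') = \sum_x p(x' \mid x, u_3)\, b(x)
```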

38 Resulting Value Function after Executing u_3. Taking the state transitions into account, we finally obtain \bar{V}_1(b | u_3), the expected value at belief b of executing u_3 and then acting for one more step.

39 Value Function after Executing u_3. (Figure: \bar{V}_1(b) and \bar{V}_1(b | u_3).)

40 Value Function for T=2. Taking into account that the agent can either directly perform u_1 or u_2, or first u_3 and then u_1 or u_2, we obtain the value function V_2(b) (after pruning).
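
One hedged way to write that choice, assuming r(b, u_3) denotes the immediate payoff of the sensing action (its value is not listed here):

```latex
% Horizon-2 value: act terminally now, or sense first and then act.
V_2(b) = \max\big\{\, r(b,u_1),\; r(b,u_2),\; r(b,u_3) + \bar{V}_1(b \mid u_3) \,\big\}
```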

41 Graphical Representation of V_2(b). (Figure: a region of beliefs where u_1 is optimal, a region where u_2 is optimal, and an unclear middle region where the outcome of measuring is important.)

42 Deep Horizons and Pruning. We have now completed a full backup in belief space. This process can be applied recursively. The value functions for T=10 and T=20 are shown on the following slide.

43 Deep Horizons and Pruning. (Figure: value functions for T=10 and T=20.)


45 Why Pruning is Essential. Each update introduces additional linear components to V, and each measurement squares the number of linear components. Thus, an unpruned value function for T=20 includes more than 10^{547,864} linear functions, and at T=30 more than 10^{561,012,337}. The pruned value function at T=20, in comparison, contains only 12 linear components. This combinatorial explosion of linear components in the value function is the major reason why POMDPs are impractical for most applications.

46 POMDP Summary. POMDPs compute the optimal action in partially observable, stochastic domains. For finite-horizon problems, the resulting value functions are piecewise linear and convex. In each iteration the number of linear constraints grows exponentially. POMDPs have so far only been applied successfully to very small state spaces with small numbers of possible observations and actions.

47 POMDP Approximations: point-based value iteration, QMDPs, AMDPs.

48 Point-based Value Iteration. Maintains a set of example beliefs and only considers constraints that maximize the value function for at least one of the examples.
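
A small Python sketch of that idea under stated assumptions: linear components are kept only if they are maximal at some example belief. Only the -100/+100 component comes from the slides; the other components, the example beliefs, and all names are made up for illustration.

```python
import numpy as np

def keep_useful_components(alphas, belief_points):
    """Keep only the linear components (alpha-vectors) that are maximal at
    at least one of the example beliefs, as in point-based value iteration."""
    alphas = np.asarray(alphas, dtype=float)          # shape (K, |S|)
    beliefs = np.asarray(belief_points, dtype=float)  # shape (B, |S|)
    values = beliefs @ alphas.T                       # value of each component at each belief
    winners = sorted(set(values.argmax(axis=1).tolist()))
    return [alphas[k] for k in winners]

# Illustrative two-state example (states x1, x2). The first component is
# r(b, u1) = -100 b(x1) + 100 b(x2) from the slides; the other two are
# made-up components added only to show the pruning.
components = [[-100.0, 100.0], [100.0, -50.0], [10.0, 10.0]]
example_beliefs = [[p, 1.0 - p] for p in (0.0, 0.25, 0.5, 0.75, 1.0)]
print(keep_useful_components(components, example_beliefs))  # third component is dropped
```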

49 Point-based Value Iteration. (Figure: exact value function vs. the PBVI approximation; value functions for T=30.)

50 QMDPs. QMDPs only consider state uncertainty in the first step; after that, the world is assumed to become fully observable.
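
The standard QMDP action-selection rule, given here as a reference formulation since it is not spelled out in the transcript: weight the fully observable MDP Q-values by the current belief.

```latex
% QMDP rule: treat uncertainty only in the current step by weighting the
% fully observable Q-values Q_MDP(x,u) with the belief b(x).
\pi_{\mathrm{QMDP}}(b) = \arg\max_u \sum_x b(x)\, Q_{\mathrm{MDP}}(x, u)
```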


52 Augmented MDPs. The augmentation adds an uncertainty component to the state space (see the sketch below). Planning is performed by an MDP in the augmented state space. The transition, observation, and payoff models have to be learned.
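
One common instance of such an augmentation, given here as an assumed example since the slide's own example is not in the transcript, pairs the most likely state with the entropy of the belief:

```latex
% Augmented ("compressed") belief: most likely state plus belief entropy.
\bar{b} = \Big(\arg\max_x b(x),\; H(b)\Big), \qquad H(b) = -\sum_x b(x)\, \log b(x)
```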

