CSE-573 Reinforcement Learning POMDPs

Planning: What action next? (agent diagram: percepts in, actions out, environment) Dimensions: Static vs. Dynamic; Fully vs. Partially Observable; Perfect vs. Noisy percepts; Deterministic vs. Stochastic; Discrete vs. Continuous; Outcomes Predictable vs. Unpredictable.

Classical Planning: What action next? (agent diagram) Assumptions: Static, Fully Observable, Perfect percepts, Predictable, Discrete, Deterministic.

Stochastic Planning: What action next? (agent diagram) Assumptions: Static, Fully Observable, Perfect percepts, Stochastic, Unpredictable, Discrete.

Markov Decision Process (MDP)
- S: a set of states
- A: a set of actions
- Pr(s'|s,a): transition model
- R(s,a,s'): reward model
- s_0: start state
- γ: discount factor

Objective of a Fully Observable MDP
- Find a policy π : S → A
- which maximizes expected discounted reward
- given an infinite horizon
- assuming full observability

Bellman Equations for MDPs
- Define V*(s), the optimal value, as the maximum expected discounted reward achievable from state s.
- V* satisfies the Bellman equation: V*(s) = max_a Q*(s,a), where Q*(s,a) = Σ_{s'} Pr(s'|s,a) [ R(s,a,s') + γ V*(s') ]
- Solve using value iteration, policy iteration, etc. (a value-iteration sketch follows below)
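
A minimal value-iteration sketch in Python for a toy two-state MDP; the transition and reward numbers are invented purely for illustration:

# Value iteration on a tiny, made-up two-state MDP (all numbers illustrative).
GAMMA = 0.9
S = ["s0", "s1"]
A = ["stay", "go"]

# P[(s, a)] = list of (s', probability); R[(s, a, s')] = reward
P = {("s0", "stay"): [("s0", 1.0)],
     ("s0", "go"):   [("s1", 0.8), ("s0", 0.2)],
     ("s1", "stay"): [("s1", 1.0)],
     ("s1", "go"):   [("s0", 1.0)]}
R = {("s0", "stay", "s0"): 0.0, ("s0", "go", "s1"): 1.0,
     ("s0", "go", "s0"):   0.0, ("s1", "stay", "s1"): 2.0,
     ("s1", "go", "s0"):   0.0}

def q_value(V, s, a):
    # Q*(s,a) = sum_s' Pr(s'|s,a) * [ R(s,a,s') + gamma * V*(s') ]
    return sum(p * (R[(s, a, s2)] + GAMMA * V[s2]) for s2, p in P[(s, a)])

V = {s: 0.0 for s in S}
for _ in range(1000):                       # iterate the Bellman backup to convergence
    V = {s: max(q_value(V, s, a) for a in A) for s in S}

policy = {s: max(A, key=lambda a: q_value(V, s, a)) for s in S}
print(V, policy)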

Reinforcement Learning
- Still have an MDP, and still looking for a policy π
- New twist: we don't know Pr and/or R
  - i.e. we don't know which states are good or what the actions do
- Must actually try out actions and states to learn

Model-based methods
- Visit different states, perform different actions
- Estimate Pr and R from the observed transitions and rewards
- Once the model is built, plan using value iteration or other methods (see the sketch below)
- Cons: can require huge amounts of data
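
A sketch of the model-estimation step, assuming experience is available as (s, a, r, s') tuples; the function and variable names are illustrative:

from collections import defaultdict

def estimate_model(experience):
    # experience: a list of (s, a, r, s_next) tuples gathered while acting
    counts = defaultdict(lambda: defaultdict(int))   # counts[(s, a)][s'] = visit count
    reward_sum = defaultdict(float)                  # summed rewards per (s, a, s')
    for s, a, r, s_next in experience:
        counts[(s, a)][s_next] += 1
        reward_sum[(s, a, s_next)] += r
    P_hat, R_hat = {}, {}
    for (s, a), nxt in counts.items():
        total = sum(nxt.values())
        for s_next, n in nxt.items():
            P_hat[(s, a, s_next)] = n / total                        # empirical Pr(s'|s,a)
            R_hat[(s, a, s_next)] = reward_sum[(s, a, s_next)] / n   # average reward
    return P_hat, R_hat      # then plan on the estimated model, e.g. with value iteration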

Model-free methods: TD learning
- Directly learn Q*(s,a) values
- sample = R(s,a,s') + γ max_{a'} Q_n(s',a')
- Nudge the old estimate towards the new sample (see the sketch below):
- Q_{n+1}(s,a) ← (1-α) Q_n(s,a) + α · sample
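
A sketch of this tabular TD/Q-learning update; the table and hyperparameter values are placeholders:

from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.9          # learning rate and discount (placeholder values)
Q = defaultdict(float)           # tabular Q-values, Q[(s, a)], default 0

def td_update(s, a, r, s_next, actions):
    # sample = R(s,a,s') + gamma * max_a' Q_n(s',a')
    sample = r + GAMMA * max(Q[(s_next, a2)] for a2 in actions)
    # Q_{n+1}(s,a) <- (1 - alpha) * Q_n(s,a) + alpha * sample
    Q[(s, a)] = (1 - ALPHA) * Q[(s, a)] + ALPHA * sample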

Properties
- Converges to the optimal values if:
  - you explore enough,
  - you make the learning rate α small enough,
  - but do not decrease it too quickly (formally, Σ_t α_t = ∞ and Σ_t α_t² < ∞).

Exploration vs. Exploitation
- ε-greedy: each time step, flip a coin
  - with probability ε, act randomly
  - with probability 1-ε, take the current greedy action
- Lower ε over time to increase exploitation as more learning has happened (see the sketch below)
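
The ε-greedy rule as a sketch; the Q table and action list are assumed to be provided by the caller (e.g. the defaultdict from the previous sketch), and the decay schedule is just one plausible choice:

import random

def epsilon_greedy(Q, s, actions, epsilon):
    # With probability epsilon act randomly, otherwise take the current greedy action.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])

def decayed_epsilon(t, eps0=1.0, decay=0.001):
    # One simple schedule: epsilon shrinks as more learning has happened.
    return eps0 / (1.0 + decay * t)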

Q-learning
- Problems:
  - too many states to visit during learning
  - Q(s,a) is a BIG table
- We want to generalize from a small set of training examples
- Solutions:
  - value function approximators (see the sketch below)
  - policy approximators
  - hierarchical reinforcement learning
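
A sketch of one such solution, a linear value-function approximator with a gradient-style Q-learning update; the feature function is a made-up placeholder:

# Linear approximator: Q(s, a) ~ w . phi(s, a). The features are hypothetical.
def features(s, a):
    return [1.0, float(s), float(a), float(s) * float(a)]

def q_approx(w, s, a):
    return sum(wi * fi for wi, fi in zip(w, features(s, a)))

def approx_q_update(w, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    # Move the weights so that Q(s,a) tracks the one-step TD target.
    target = r + gamma * max(q_approx(w, s_next, a2) for a2 in actions)
    error = target - q_approx(w, s, a)
    return [wi + alpha * error * fi for wi, fi in zip(w, features(s, a))]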

Task Hierarchy: MAXQ Decomposition [Dietterich '00] (figure: a task tree with Root at the top; subtasks such as Take, Give, Navigate(loc), Deliver, Fetch; and primitive actions such as Extend-arm, Grab, Release, Move east/west/south/north). Children of a task are unordered.

MAXQ Decomposition
- Augment the state s by adding the subtask i: [s, i].
- Define C([s,i], j) as the reward received in i after j finishes.
- Q([s,Fetch], Navigate(prr)) = V([s,Navigate(prr)]) + C([s,Fetch], Navigate(prr)), where the first term is the reward received while navigating and the second is the reward received after navigation (see the sketch below).
- Express V in terms of C; learn C instead of learning Q.
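
A highly simplified sketch of the decomposition equation; the tables and their numeric values are hypothetical, and real MAXQ learns C from experience:

# Hypothetical learned tables (the numbers are placeholders):
# V[(s, subtask)]        -- reward accumulated while executing the subtask in s
# C[(s, parent, child)]  -- completion value: reward received in parent after child finishes
V = {("s", "Navigate(prr)"): 4.0}
C = {("s", "Fetch", "Navigate(prr)"): 2.5}

def q_maxq(s, parent, child):
    # Q([s, parent], child) = V([s, child]) + C([s, parent], child)
    return V[(s, child)] + C[(s, parent, child)]

print(q_maxq("s", "Fetch", "Navigate(prr)"))   # 4.0 + 2.5 = 6.5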

MAXQ Decomposition (contd)
- State abstraction
  - Finding irrelevant actions
  - Finding funnel actions

POMDPs: What action next? (agent diagram) Assumptions: Static, Partially Observable, Noisy percepts, Stochastic, Unpredictable, Discrete.

Deterministic, fully observable

Stochastic, Fully Observable

Stochastic, Partially Observable

POMDPs: In POMDPs we apply the very same idea as in MDPs. Since the state is not directly observable, the agent has to make its decisions based on the belief state, a posterior distribution over states. Let b be the agent's belief about the current state. POMDPs compute a value function over belief space: V_T(b) = γ max_u [ r(b,u) + ∫ V_{T-1}(b') p(b'|u,b) db' ].

Problems: Each belief is a probability distribution; thus, each value in a POMDP is a function of an entire probability distribution. This is problematic, since probability distributions are continuous. Additionally, we have to deal with the huge complexity of belief spaces. For finite worlds with finite state, action, and measurement spaces and finite horizons, however, we can effectively represent the value functions by piecewise linear functions.

An Illustrative Example (figure: a two-state POMDP with states x1 and x2, terminal actions u1 and u2 with associated payoffs, a sensing action u3, and measurements).

The Parameters of the Example: The actions u1 and u2 are terminal actions. The action u3 is a sensing action that potentially leads to a state transition. The horizon is finite and γ = 1.

Payoff in POMDPs: In MDPs, the payoff (or return) depends on the state of the system. In POMDPs, however, the true state is not exactly known. Therefore, we compute the expected payoff by integrating over all states: r(b,u) = E_x[r(x,u)] = ∫ r(x,u) b(x) dx.

Payoffs in Our Example (1): If we are totally certain that we are in state x1 and execute action u1, we receive a reward of -100. If, on the other hand, we definitely know that we are in x2 and execute u1, the reward is +100. In between, the payoff is the linear combination of the two extreme values, weighted by the belief probabilities.
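
A sketch of this linear combination in the two-state example; the u1 payoffs follow the preceding slide, while the u2 payoffs are placeholders assumed only for illustration:

# Two-state belief b = (p1, 1 - p1). The u1 payoffs follow the slide;
# the u2 payoffs below are illustrative placeholders.
r = {("x1", "u1"): -100.0, ("x2", "u1"): 100.0,
     ("x1", "u2"):  100.0, ("x2", "u2"): -50.0}

def expected_payoff(p1, u):
    # r(b, u) = p1 * r(x1, u) + (1 - p1) * r(x2, u)
    return p1 * r[("x1", u)] + (1 - p1) * r[("x2", u)]

for p1 in (0.0, 0.5, 1.0):
    print(p1, expected_payoff(p1, "u1"), expected_payoff(p1, "u2"))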

Payoffs in Our Example (2) (figure: the payoff functions r(b,u1) and r(b,u2) as linear functions of the belief).

The Resulting Policy for T=1: Given a finite-horizon POMDP with T=1, we would use V1(b) to determine the optimal policy. In our example, the optimal policy for T=1 picks whichever of u1 and u2 has the larger expected payoff at the current belief; its value is the upper thick graph in the diagram.

Piecewise Linearity, Convexity: The resulting value function V1(b) is the maximum of the three functions at each point. It is piecewise linear and convex.
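
A sketch of evaluating such a piecewise-linear, convex function for a two-state belief b = (p1, 1-p1), where each linear component is stored by its values at x1 and x2; the particular components are illustrative:

# Each linear component ("alpha vector") stores its value at x1 and at x2.
# These particular components are illustrative, not the slide's exact ones.
alphas = [(-100.0, 100.0), (100.0, -50.0), (-1.0, -1.0)]

def value(p1, components):
    # V(b) = max over components of [ p1 * a(x1) + (1 - p1) * a(x2) ]
    return max(p1 * a1 + (1 - p1) * a2 for a1, a2 in components)

for p1 in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(p1, value(p1, alphas))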

Pruning: If we carefully consider V1(b), we see that only the first two components contribute. The third component can therefore safely be pruned away from V1(b).
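
A simple pruning sketch in the same two-state representation: keep a component only if it attains the maximum at some belief on a fine grid (exact pruning is usually done with linear programs; this is only an approximation):

def prune(components, grid_size=1001):
    kept = set()
    for i in range(grid_size):
        p1 = i / (grid_size - 1)
        values = [p1 * a1 + (1 - p1) * a2 for a1, a2 in components]
        kept.add(values.index(max(values)))       # component that is maximal at this belief
    return [components[i] for i in sorted(kept)]

# With the illustrative components from the previous sketch, the constant -1
# component never attains the maximum and is pruned away:
print(prune([(-100.0, 100.0), (100.0, -50.0), (-1.0, -1.0)]))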

Increasing the Time Horizon: Assume the robot can make an observation before deciding on an action. (figure: V1(b))

Increasing the Time Horizon: Assume the robot can make an observation before deciding on an action. Suppose the robot perceives z1, for which p(z1|x1)=0.7 and p(z1|x2)=0.3. Given the observation z1 we update the belief using Bayes rule.

Value Function (figure: the belief update b'(b|z1) together with the functions V1(b) and V1(b|z1)).

Increasing the Time Horizon: Assume the robot can make an observation before deciding on an action. Suppose the robot perceives z1, for which p(z1|x1)=0.7 and p(z1|x2)=0.3. Given the observation z1 we update the belief using Bayes rule; V1(b|z1) is then obtained by evaluating V1 at the updated belief.
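
A sketch of this Bayes-rule belief update with the measurement model from the slide (p(z1|x1) = 0.7, p(z1|x2) = 0.3):

def belief_update(p1, p_z_given_x1, p_z_given_x2):
    # Bayes rule for a two-state belief b = (p1, 1 - p1) after observing z.
    p_z = p_z_given_x1 * p1 + p_z_given_x2 * (1 - p1)    # normalizer p(z)
    return p_z_given_x1 * p1 / p_z, p_z

# Observing z1 with p(z1|x1) = 0.7 and p(z1|x2) = 0.3:
p1_new, p_z1 = belief_update(0.6, 0.7, 0.3)
print(p1_new, p_z1)   # belief in x1 rises, since z1 is more likely under x1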

Expected Value after Measuring: Since we do not know in advance what the next measurement will be, we have to compute the expectation over measurements: \bar{V}1(b) = E_z[ V1(b|z) ] = Σ_i p(z_i) V1(b|z_i).
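
A self-contained sketch of this expectation for the two-state case; the second measurement z2 is assumed to be the complement of z1, and the value-function components are illustrative:

def value(p1, components):
    return max(p1 * a1 + (1 - p1) * a2 for a1, a2 in components)

def expected_value_after_measuring(p1, components, obs_model):
    # obs_model[z] = (p(z|x1), p(z|x2)); the belief is b = (p1, 1 - p1).
    # bar(V)(b) = sum_z p(z|b) * V(b|z)
    total = 0.0
    for pz_x1, pz_x2 in obs_model.values():
        p_z = pz_x1 * p1 + pz_x2 * (1 - p1)
        p1_post = pz_x1 * p1 / p_z            # Bayes update for this measurement
        total += p_z * value(p1_post, components)
    return total

obs_model = {"z1": (0.7, 0.3), "z2": (0.3, 0.7)}   # z2 assumed complementary to z1
print(expected_value_after_measuring(0.5, [(-100.0, 100.0), (100.0, -50.0)], obs_model))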

Resulting Value Function: The four possible combinations yield the following value function, which can then be simplified and pruned.

Value Function (figure: \bar{V}1(b), obtained by weighting V1(b|z1) by p(z1) and V1(b|z2) by p(z2), together with the belief update b'(b|z1)).

State Transitions (Prediction): When the agent selects u3, its state potentially changes. When computing the value function, we have to take these potential state changes into account.
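
A sketch of the prediction (state-transition) step for a two-state belief; the transition probabilities used for u3 are placeholders, not necessarily the slide's values:

def predict(p1, p_x1_given_x1, p_x1_given_x2):
    # b'(x1) = p(x1'|x1, u3) * b(x1) + p(x1'|x2, u3) * b(x2)
    return p_x1_given_x1 * p1 + p_x1_given_x2 * (1 - p1)

# Illustrative transition model for the sensing action u3:
print(predict(0.9, 0.2, 0.8))   # belief in x1 after executing u3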

Resulting Value Function after Executing u3: Taking the state transitions into account, we finally obtain the value function \bar{V}1(b|u3).

Value Function after Executing u3 (figure: \bar{V}1(b) and \bar{V}1(b|u3)).

Value Function for T=2: Taking into account that the agent can either directly perform u1 or u2, or first u3 and then u1 or u2, we obtain (after pruning) the value function V2(b).

Graphical Representation of V2(b) (figure: regions of belief space where u1 is optimal, where u2 is optimal, and an unclear region in between where the outcome of measuring is important).

Deep Horizons and Pruning: We have now completed a full backup in belief space. This process can be applied recursively. The value functions for T=10 and T=20 are shown in the following figures.

(figures: the pruned value functions for T=10 and T=20)

Why Pruning is Essential: Each update introduces additional linear components to V. Each measurement squares the number of linear components. Thus, an unpruned value function for T=20 includes more than 10^547,864 linear functions, and at T=30 we have more than 10^561,012,337 linear functions. The pruned value function at T=20, in comparison, contains only 12 linear components. This combinatorial explosion of linear components in the value function is the major reason why exact POMDP planning is impractical for most applications.

POMDP Summary: POMDPs compute the optimal action in partially observable, stochastic domains. For finite-horizon problems, the resulting value functions are piecewise linear and convex. In each iteration the number of linear constraints grows exponentially. POMDPs have so far only been applied successfully to very small state spaces with small numbers of possible observations and actions.

POMDP Approximations
- Point-based value iteration (PBVI)
- QMDPs
- Augmented MDPs (AMDPs)

Point-based Value Iteration: Maintains a set of example beliefs; only considers constraints that maximize the value function for at least one of the examples.
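
A sketch of the point-based selection idea: among candidate linear components, keep only those that are maximal at one of the example beliefs (this omits the rest of the PBVI backup):

def pbvi_select(candidates, belief_points):
    # candidates: (a(x1), a(x2)) components of a two-state value function
    # belief_points: the p1 values of the example beliefs being maintained
    kept = set()
    for p1 in belief_points:
        values = [p1 * a1 + (1 - p1) * a2 for a1, a2 in candidates]
        kept.add(values.index(max(values)))
    return [candidates[i] for i in sorted(kept)]

# Only components that are maximal at some example belief survive:
print(pbvi_select([(-100.0, 100.0), (100.0, -50.0), (-1.0, -1.0)],
                  belief_points=[0.1, 0.5, 0.9]))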

Point-based Value Iteration (figure: the exact value function vs. the PBVI approximation for T=30).

QMDPs: QMDPs only consider state uncertainty in the first step; after that, they assume the world becomes fully observable.
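
A sketch of QMDP action selection, averaging MDP Q-values under the current belief; the Q table here is a placeholder:

def qmdp_action(belief, Q_mdp, actions):
    # Pick argmax_u of sum_x b(x) * Q_MDP(x, u), i.e. assume full observability
    # after the first step.
    return max(actions, key=lambda u: sum(belief[x] * Q_mdp[(x, u)] for x in belief))

# Illustrative two-state example:
Q_mdp = {("x1", "u1"): -100.0, ("x2", "u1"): 100.0,
         ("x1", "u2"):  100.0, ("x2", "u2"): -50.0}
print(qmdp_action({"x1": 0.3, "x2": 0.7}, Q_mdp, ["u1", "u2"]))   # -> "u1"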

Augmented MDPs
- Augmentation adds an uncertainty component to the state space, e.g. the most likely state together with the entropy of the belief (see the sketch below)
- Planning is performed with an MDP in the augmented state space
- Transition, observation and payoff models have to be learned
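
A sketch of one common augmentation, compressing the belief into its most likely state plus its entropy (a typical choice, assumed here rather than taken from the slides):

import math

def augmented_state(belief):
    # belief: dict mapping state -> probability
    most_likely = max(belief, key=belief.get)
    entropy = -sum(p * math.log(p) for p in belief.values() if p > 0.0)
    return most_likely, entropy    # plan with an ordinary MDP over these pairs

print(augmented_state({"x1": 0.8, "x2": 0.2}))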