Reinforcement Learning Dealing with Partial Observability

Reinforcement Learning Dealing with Partial Observability Subramanian Ramamoorthy School of Informatics 16 November, 2009

Problem Classes: Planning and Control Deterministic vs. stochastic actions; full vs. partial observability.

Deterministic, fully observable (figure).

Stochastic, fully observable (figure).

Markov Decision Process (MDP) (figure: a transition graph over states s1-s5, annotated with transition probabilities and payoffs such as r=20, r=0, r=-10).

T-step Policies Optimal policy and value function for a finite horizon of T steps (see the sketch below).
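
The slide's equations did not survive the transcript. A hedged reconstruction using standard finite-horizon MDP definitions (the discounting convention is assumed; the original slide may place the discount factor differently):

```latex
\pi_T(x) = \operatorname*{argmax}_{u} \Big[ r(x,u) + \gamma \sum_{x'} p(x' \mid x, u)\, V_{T-1}(x') \Big],
\qquad
V_T(x) = \max_{u} \Big[ r(x,u) + \gamma \sum_{x'} p(x' \mid x, u)\, V_{T-1}(x') \Big]
```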

Infinite Horizon The optimal policy is characterized by the Bellman equation: its fixed point is the optimal value function, and this is both a necessary and a sufficient condition for optimality (see the sketch below).
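
A minimal sketch of the Bellman optimality equation referred to above, in standard notation (not copied verbatim from the slide):

```latex
V^{*}(x) = \max_{u} \Big[ r(x,u) + \gamma \sum_{x'} p(x' \mid x, u)\, V^{*}(x') \Big],
\qquad
\pi^{*}(x) = \operatorname*{argmax}_{u} \Big[ r(x,u) + \gamma \sum_{x'} p(x' \mid x, u)\, V^{*}(x') \Big]
```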

Value Iteration for all x do: initialize V(x) endfor; repeat until convergence: for all x do: update V(x) with the Bellman backup endfor endrepeat (a runnable sketch follows below).
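
A runnable sketch of tabular value iteration consistent with the loop structure above. The function name, array layout, and discount value are illustrative assumptions, not taken from the slides:

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-6):
    """Tabular value iteration for a finite MDP.

    P: (A, S, S) array, P[a, s, s2] = p(s2 | s, a).
    R: (S, A) array of immediate payoffs r(s, a).
    Returns the converged value function V (length S) and a greedy policy.
    """
    S = R.shape[0]
    V = np.zeros(S)
    while True:
        # Q[s, a] = r(s, a) + gamma * sum_s2 p(s2 | s, a) * V[s2]
        Q = R + gamma * np.einsum('asn,n->sa', P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new
```

The inner update is the same recursion as the T-step value function; iterating it to convergence yields the infinite-horizon fixed point.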

Value Iteration for Motion Planning (figure).

POMDPs In POMDPs we apply the very same idea as in MDPs. Since the state is not observable, the agent has to make its decisions based on the belief state, which is a posterior distribution over states. Let b be the belief of the agent about the current state. POMDPs compute a value function over belief space (see the sketch below).
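
A hedged form of the belief-space value function the slide refers to (the original equation is not in the transcript; this follows the usual POMDP recursion, with b' denoting the belief after taking action u and observing z):

```latex
V_T(b) = \max_{u} \Big[ r(b,u) + \gamma \sum_{z} p(z \mid b, u)\, V_{T-1}(b') \Big]
```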

Problems Each belief is a probability distribution; thus, each value in a POMDP is a function of an entire probability distribution. This is problematic, since probability distributions are continuous. Additionally, we have to deal with the huge complexity of belief spaces. For finite worlds with finite state, action, and measurement spaces and finite horizons, however, we can effectively represent the value functions by piecewise linear functions.
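
A sketch of the piecewise-linear representation meant here, in standard "alpha-vector" notation (the notation is assumed, not copied from the slide):

```latex
V_T(b) = \max_{k} \sum_{i} v^{k}_{i}\, b(x_i)
```

Each coefficient vector v^k defines one linear function (one "component") of the belief, and the value function is their pointwise maximum, hence piecewise linear and convex.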

An Illustrative Example (figure: a two-state problem with states x1 and x2, terminal actions u1 and u2, a sensing action u3, measurements, and payoffs).

The Parameters of the Example The actions u1 and u2 are terminal actions. The action u3 is a sensing action that potentially leads to a state transition. The horizon is finite and the discount factor is 1.

Payoff in POMDPs In MDPs, the payoff (or return) depends on the state of the system. In POMDPs, however, the true state is not exactly known. Therefore, we compute the expected payoff by integrating over all states (see the sketch below).
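
A standard form of the expected payoff the slide integrates over (the slide's own equation was an image and is not in the transcript):

```latex
r(b,u) = E_{x}\big[ r(x,u) \big] = \int r(x,u)\, b(x)\, dx
       = \sum_{i} r(x_i, u)\, b(x_i) \quad \text{(finite state space)}
```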

Payoffs in Our Example (1) If we are totally certain that we are in state x1 and execute action u1, we receive a reward of -100. If, on the other hand, we definitely know that we are in x2 and execute u1, the reward is +100. In between, it is the linear combination of the extreme values, weighted by the probabilities (see the sketch below).
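
Writing p1 = b(x1), the linear combination described above follows directly from the two payoffs stated on the slide:

```latex
r(b, u_1) = -100\, p_1 + 100\,(1 - p_1)
```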

Payoffs in Our Example (2) (figure).

The Resulting Policy for T=1 Given a finite POMDP with T=1, we would use V1(b) to determine the optimal policy. In our example, the optimal policy for T=1 picks, at each belief, the action whose payoff line forms the upper thick graph in the diagram (see the sketch below).
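
In general form (the example's exact switching threshold depends on the payoffs of u2, which do not appear in this transcript, so it is left unspecified here):

```latex
\pi_1(b) = \operatorname*{argmax}_{u}\; r(b,u)
```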

Piecewise Linearity, Convexity The resulting value function V1(b) is the maximum of the three functions at each point. It is piecewise linear and convex.

Pruning If we carefully consider V1(b), we see that only the first two components contribute. The third component can therefore safely be pruned away from V1(b).

Increasing the Time Horizon Assume the robot can make an observation before deciding on an action. (figure: V1(b))

Increasing the Time Horizon Assume the robot can make an observation before deciding on an action. Suppose the robot perceives z1, for which p(z1 | x1) = 0.7 and p(z1 | x2) = 0.3. Given the observation z1, we update the belief using Bayes rule.

Value Function (figure: V1(b) and V1(b|z1) plotted over the belief, with p1 and the updated belief p'1 marked).

Increasing the Time Horizon Given the observation z1 (with p(z1 | x1) = 0.7 and p(z1 | x2) = 0.3), we update the belief using Bayes rule; thus V1(b | z1) is given by the expression sketched below.
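
A hedged reconstruction of the update, writing p1 = b(x1) and using the measurement probabilities stated on the slide:

```latex
p(z_1 \mid b) = 0.7\, p_1 + 0.3\,(1 - p_1) = 0.4\, p_1 + 0.3, \qquad
b'(x_1 \mid z_1) = \frac{0.7\, p_1}{0.4\, p_1 + 0.3}, \quad
b'(x_2 \mid z_1) = \frac{0.3\,(1 - p_1)}{0.4\, p_1 + 0.3}
```

V1(b | z1) is then the T=1 value function evaluated at this updated belief.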

Expected Value after Measuring Since we do not know in advance what the next measurement will be, we have to compute the expected value over all possible measurements (see the sketch below).
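
The expectation referred to above, in the notation of the example (z2 denotes the other possible measurement):

```latex
\bar{V}_1(b) = E_{z}\big[ V_1(b \mid z) \big]
             = p(z_1 \mid b)\, V_1(b \mid z_1) + p(z_2 \mid b)\, V_1(b \mid z_2)
```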

Resulting Value Function The four possible combinations yield the following function, which can then be simplified and pruned.

Value Function (figure: p(z1) V1(b|z1) and p(z2) V1(b|z2) plotted over the updated belief b'(b|z)).

State Transitions (Prediction) When the agent selects u3, its state potentially changes. When computing the value function, we have to take these potential state changes into account (see the sketch below).
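
A sketch of the prediction (state-transition) step in standard belief-propagation form; the example's numeric transition probabilities for u3 are not in the transcript:

```latex
\bar{b}(x') = \sum_{x} p(x' \mid u_3, x)\, b(x)
```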

Resulting Value Function after executing u3 Taking the state transitions into account, we finally obtain the value function sketched below.
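
A generic form of what is obtained here (the slide's concrete expression, which plugs in the example's transition probabilities, is not in the transcript):

```latex
\bar{V}_1(b \mid u_3) = \bar{V}_1(\bar{b}), \qquad
\bar{b}(x') = \sum_{x} p(x' \mid u_3, x)\, b(x)
```

That is, the measurement-expected value function is evaluated at the predicted belief.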

Value Function after executing u3 (figure).

Value Function for T=2 Taking into account that the agent can either directly perform u1 or u2, or first u3 and then u1 or u2, we obtain (after pruning) the value function sketched below.
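
A hedged form of the T=2 value function described above; the immediate payoff of the sensing action is written r(b, u3) because its value is not given in the transcript:

```latex
V_2(b) = \max\Big\{\; r(b, u_1),\;\; r(b, u_2),\;\; r(b, u_3) + \bar{V}_1(b \mid u_3) \;\Big\}
```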

Graphical Representation of V2(b) (figure: a region of the belief where u1 is optimal, a region where u2 is optimal, and an unclear region in between where the outcome of the measurement is important).

Deep Horizons and Pruning We have now completed a full backup in belief space. This process can be applied recursively. The value functions for T=10 and T=20 are shown in the following figures.

Deep Horizons and Pruning (figure: value functions for T=10 and T=20).

Why Pruning is Essential Each update introduces additional linear components to V. Each measurement squares the number of linear components. Thus, an un-pruned value function for T=20 includes more than 10^547,864 linear functions; at T=30 we have 10^561,012,337 linear functions. The pruned value function at T=20, in comparison, contains only 12 linear components. This combinatorial explosion of linear components in the value function is the major reason why exact POMDP solutions are impractical for most applications.

POMDP Approximations Point-based value iteration, augmented MDPs, and many others (see the research literature).

Point-based Value Iteration Maintains a set of example beliefs. Only considers those linear components (constraints) that maximize the value function for at least one of the example beliefs (see the sketch below).
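
A compact sketch of one point-based backup under the usual finite POMDP model. The function name, array layout, and discount value are illustrative assumptions, not taken from the slides:

```python
import numpy as np

def pbvi_backup(B, Gamma, T, O, R, gamma=0.95):
    """One point-based backup of the alpha-vector set Gamma.

    B:     (N, S) array of example beliefs (rows sum to 1).
    Gamma: (K, S) array of alpha-vectors representing the current value function.
    T:     (A, S, S) transition model, T[a, s, s2] = p(s2 | s, a).
    O:     (A, S, Z) observation model, O[a, s2, z] = p(z | s2, a).
    R:     (S, A) immediate payoffs.
    Returns a new array of alpha-vectors, one per example belief (duplicates removed).
    """
    A, S, _ = T.shape
    Z = O.shape[2]
    new_alphas = []
    for b in B:
        best_val, best_alpha = -np.inf, None
        for a in range(A):
            alpha_a = R[:, a].copy()
            for z in range(Z):
                # g[k, s] = sum_s2 T[a, s, s2] * O[a, s2, z] * Gamma[k, s2]
                g = Gamma @ (T[a] * O[a, :, z]).T        # shape (K, S)
                k_best = np.argmax(g @ b)                # constraint best at b
                alpha_a += gamma * g[k_best]
            val = alpha_a @ b
            if val > best_val:
                best_val, best_alpha = val, alpha_a
        new_alphas.append(best_alpha)
    # Keep only distinct vectors; each maximizes the value at one example belief.
    return np.unique(np.array(new_alphas), axis=0)
```

Because each new vector only has to be optimal at one of the example beliefs, the size of the representation stays bounded by the number of beliefs rather than exploding with the horizon.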

Point-based Value Iteration (figure: value functions for T=30, comparing the exact value function with the PBVI approximation).

Example Application (figure).

Augmented MDPs Augmentation adds an uncertainty component to the state space (see the sketch below). Planning is performed by an MDP in the augmented state space. Transition, observation and payoff models have to be learned.
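
A hedged sketch of the augmented state commonly used in this family of methods: the most likely state paired with the entropy of the belief. This particular choice is an assumption; the slide's own definition is not in the transcript:

```latex
\bar{b} = \begin{pmatrix} \operatorname*{argmax}_{x}\, b(x) \\ H_b(x) \end{pmatrix},
\qquad
H_b(x) = -\int b(x)\, \log b(x)\, dx
```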

Summary POMDPs compute the optimal action in partially observable, stochastic domains. For finite-horizon problems, the resulting value functions are piecewise linear and convex. In each iteration the number of linear constraints grows exponentially. So far, POMDPs have only been applied successfully to very small state spaces with small numbers of possible observations and actions. Approximation methods and alternative problem formulations/representations are the focus of current research. Acknowledgement: The slides for this lecture are taken from the Probabilistic Robotics web site (http://robots.stanford.edu/probabilistic-robotics/).