
1 Generalized and Bounded Policy Iteration for Finitely Nested Interactive POMDPs: Scaling Up
Ekhlas Sonu, Prashant Doshi
Dept. of Computer Science, University of Georgia
AAMAS 2012

2 Overview
We generalize Bounded Policy Iteration (BPI) for POMDPs to the multiagent decision-making framework of Interactive POMDPs (I-POMDPs)
We discuss the challenges associated with this generalization
Substantial scalability is achieved using the generalized approach

3 Introduction: Interactive POMDP
Interactive POMDP (Gmytrasiewicz & Doshi, 2005): a generalization of the POMDP to multiagent settings
Applications: money laundering (Ng et al., 2010), the lemonade stand game (Wunder et al., 2011), modeling human behavior (Doshi et al., 2010), and more
Differs from the Dec-POMDP:
  Dec-POMDP: a team of agents
  I-POMDP: an individual agent in the presence of other agents, in cooperative, competitive, or neutral settings

4 Introduction: I-POMDP (finitely nested, two agents)
An I-POMDP_i,l for agent i at nesting level l is defined over interactive states IS_i,l = S × Θ_j,l-1, where:
  S: set of physical states
  Θ_j,l-1: set of intentional models of agent j at level l-1
  A = A_i × A_j: joint actions
  Ω_i: set of observations of i
  T_i: S × A_i × A_j → Δ(S), agent i's transition function
  O_i: S × A_i × A_j → Δ(Ω_i), agent i's observation function
  R_i: S × A_i × A_j → R, agent i's reward function
(The slide's figure shows both agents i and j acting on the shared physical states, each with its own transition, observation, and reward functions.)
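The following is an illustrative container for the components listed above; the class and field names are my own, not the paper's, and the arrays are assumed to be indexed as noted in the comments.

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical container for the level-l I-POMDP components, with the model
# dynamics stored as arrays: T[s, a_i, a_j, s'], O[s', a_i, a_j, o_i],
# R[s, a_i, a_j].
@dataclass
class IPOMDP:
    states: list            # physical states S
    actions_i: list         # A_i
    actions_j: list         # A_j
    observations_i: list    # Omega_i
    models_j: list          # intentional models of j at level l-1 (Theta_j,l-1)
    T: np.ndarray           # shape (|S|, |A_i|, |A_j|, |S|)
    O: np.ndarray           # shape (|S|, |A_i|, |A_j|, |Omega_i|)
    R: np.ndarray           # shape (|S|, |A_i|, |A_j|)

    def interactive_states(self):
        # IS_i,l = S x Theta_j,l-1: a physical state paired with a model of j
        return [(s, m) for s in self.states for m in self.models_j]
```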

5 I-POMDP Belief Update and Value Function
Belief update: an agent must predict the other agent's actions by anticipating that agent's updated beliefs over time. The belief update therefore consists of:
  Updating the distribution over physical states, using agent i's transition and observation functions
  Updating the distribution over the other agent's (dynamic) models, using the other agent's belief update and its observation function
Value function: must incorporate the I-POMDP belief update in computing long-term rewards
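A hedged sketch of this two-part update follows, for the special case (used later by I-BPI) in which agent j's model is summarized by a node of j's controller; array names and layouts are my own assumptions, not the paper's notation.

```python
import numpy as np

# b'(s', n_j') is proportional to
#   sum_{s, n_j} b(s, n_j) sum_{a_j} psi_j(n_j, a_j) T(s, a_i, a_j, s')
#     * O_i(s', a_i, a_j, o_i)
#     * sum_{o_j} O_j(s', a_i, a_j, o_j) eta_j(n_j, a_j, o_j, n_j')
def ipomdp_belief_update(b, a_i, o_i, psi_j, eta_j, T, Oi, Oj):
    """b: (|S|, |Nj|); returns the updated belief of the same shape."""
    S, Nj = b.shape
    Aj, Zj = psi_j.shape[1], Oj.shape[3]
    b_new = np.zeros((S, Nj))
    for s2 in range(S):
        for nj2 in range(Nj):
            total = 0.0
            for s in range(S):
                for nj in range(Nj):
                    for aj in range(Aj):
                        # update of j's model: marginalize over j's observations
                        pj = sum(Oj[s2, a_i, aj, oj] * eta_j[nj, aj, oj, nj2]
                                 for oj in range(Zj))
                        total += (b[s, nj] * psi_j[nj, aj] *
                                  T[s, a_i, aj, s2] * Oi[s2, a_i, aj, o_i] * pj)
            b_new[s2, nj2] = total
    return b_new / b_new.sum()
```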

6 Solving I-POMDPs (Related Work)
Previous work: value iteration algorithms over beliefs b ∈ Δ(IS_i,l)
  Interactive particle filtering (I-PF) (Doshi & Gmytrasiewicz, 2009): a nested particle filter giving a sampled, recursive representation of the agent's nested belief
  Interactive point-based value iteration (I-PBVI) (Doshi & Perez, 2008): point-based domination check
These methods iteratively apply the backup operator, an expensive operation, and scale only to toy problems
Over multiple time steps they suffer from the curse of history and the curse of dimensionality

7 Background: Policy Iteration
Policy iteration: a class of solution algorithms that searches the policy space; suffers from exponential growth in solution size
Bounded Policy Iteration (Poupart & Boutilier, 2003): fixed solution size (controlled growth); applied to POMDPs and Dec-POMDPs
  Dec-BPI (Bernstein, Hansen & Zilberstein, 2005): its optional correlation device may not be feasible in non-cooperative settings
Contribution: we present the first (approximate) policy iteration algorithm for I-POMDPs, a generalization of BPI, and show scalability to larger problems

8 Policy Representation
Possible representations of a policy:
  Tree representation
  Finite state controllers (Hansen, 1998): node → action, edge → observation
Each node has an infinite-horizon policy rooted at it, and a value vector V_n associated with it that is linear over the entire belief space
A belief is mapped to the node n that optimizes the expected reward from that belief, i.e., argmax_n b · V_n
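A minimal sketch of the belief-to-node mapping just described; the function name and example numbers are illustrative only.

```python
import numpy as np

# Map a belief to the controller node whose value vector maximizes expected
# reward, i.e., argmax_n b . V_n.
def best_node(belief, value_vectors):
    """belief: array of shape (|S|,); value_vectors: array of shape (|N|, |S|)."""
    return int(np.argmax(value_vectors @ belief))

# Example: 2 physical states, 3 controller nodes.
V = np.array([[1.0, 0.0],
              [0.2, 0.8],
              [0.0, 1.0]])
b = np.array([0.3, 0.7])
print(best_node(b, V))  # node 2 dominates at this belief
```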

9 Finite State Controller
A finite state controller (FSC) for agent i may be defined by:
  the set of nodes in the FSC of agent i
  the set of edge labels, namely the observations Ω_i
  an action-selection rule for each node and a node-transition rule on observations
The value vectors of the nodes partition the entire belief space, each belief being assigned to its maximizing node
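Below is a hedged data-structure sketch of such a controller, assuming the usual stochastic parameterization from the BPI literature (psi[n, a] = P(a | n), eta[n, a, o, n'] = P(n' | n, a, o)); the class is not the authors' code.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class FiniteStateController:
    num_nodes: int
    num_actions: int
    num_obs: int
    psi: np.ndarray = None   # action policy, shape (|N|, |A|)
    eta: np.ndarray = None   # node transitions, shape (|N|, |A|, |O|, |N|)

    def __post_init__(self):
        if self.psi is None:
            # start with a deterministic random action per node
            self.psi = np.zeros((self.num_nodes, self.num_actions))
            self.psi[np.arange(self.num_nodes),
                     np.random.randint(self.num_actions, size=self.num_nodes)] = 1.0
        if self.eta is None:
            # every (node, action, observation) transitions back to the same node
            self.eta = np.zeros((self.num_nodes, self.num_actions,
                                 self.num_obs, self.num_nodes))
            for n in range(self.num_nodes):
                self.eta[n, :, :, n] = 1.0
```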

10 Policy Iteration
Starting with an initial controller, iterate over two steps until convergence:
  Policy evaluation: evaluate V_n for each node by solving a system of linear equations
  Policy improvement: construct a better controller, possibly by adding new nodes
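For the single-agent case, the evaluation step can be sketched as follows; the array layouts are assumptions and the code is not the authors' implementation.

```python
import numpy as np

# Solve the linear system for a stochastic POMDP controller:
#   V(n,s) = sum_a psi(n,a) [ R(s,a)
#            + gamma * sum_{s',o,n'} T(s,a,s') O(s',a,o) eta(n,a,o,n') V(n',s') ]
def evaluate_controller(psi, eta, T, O, R, gamma=0.95):
    N, A = psi.shape
    S = T.shape[0]
    C = np.zeros((N * S, N * S))
    r = np.zeros(N * S)
    for n in range(N):
        for s in range(S):
            row = n * S + s
            for a in range(A):
                r[row] += psi[n, a] * R[s, a]
                for s2 in range(S):
                    for o in range(O.shape[2]):
                        for n2 in range(N):
                            C[row, n2 * S + s2] += (psi[n, a] * gamma *
                                T[s, a, s2] * O[s2, a, o] * eta[n, a, o, n2])
    V = np.linalg.solve(np.eye(N * S) - C, r)
    return V.reshape(N, S)
```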

11 Policy Improvement (Hansen, 1998)
Apply the backup operator, i.e., construct new nodes for every combination of an action and a transition on each observation: |A| |N|^|Ω| new nodes
Add them to the controller, then prune all dominated nodes
Drawback: leads to exponential growth in controller size
(The slide's figure plots value vectors over P(s) as an example of policy iteration for a POMDP.)
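A small illustrative enumeration of why the full backup explodes; the function and example labels are hypothetical.

```python
from itertools import product

# Each backed-up node pairs one action with a choice of successor node for
# every observation, giving |A| * |N|^|O| candidates.
def enumerate_backup_nodes(actions, nodes, observations):
    for a in actions:
        for succ in product(nodes, repeat=len(observations)):
            yield a, dict(zip(observations, succ))

# Example: 2 actions, 3 nodes, 2 observations -> 2 * 3^2 = 18 candidate nodes.
cands = list(enumerate_backup_nodes(['listen', 'open'], [0, 1, 2],
                                    ['growl-L', 'growl-R']))
print(len(cands))  # 18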

12 Bounded Policy Iteration (BPI) (Poupart & Boutilier, 2003)
Instead of performing a complete backup, replace a node with a better node
A linear program performs the partial backup; the new node is a convex combination of two backed-up nodes
Changes to the controller: the node's stochastic action policy and its stochastic observation (node-transition) policy
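The following is a hedged sketch of the BPI partial-backup linear program for one node n; the variable names (c_a, c_aon) and array layouts are my own, and the code is only an illustration of the technique, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import linprog

# Maximize eps subject to, for every state s,
#   V_n(s) + eps <= sum_a c_a R(s,a)
#     + gamma * sum_{a,s',o,n'} T(s,a,s') O(s',a,o) c_{a,o,n'} V_{n'}(s'),
# with sum_a c_a = 1 and sum_{n'} c_{a,o,n'} = c_a for all a, o.
def bpi_partial_backup(n, V, T, O, R, gamma=0.95):
    N, S = V.shape
    A = R.shape[1]
    Z = O.shape[2]
    num_vars = 1 + A + A * Z * N           # [eps, c_a, c_{a,o,n'}]

    def idx_c(a):            return 1 + a
    def idx_caon(a, o, n2):  return 1 + A + (a * Z + o) * N + n2

    obj = np.zeros(num_vars); obj[0] = -1.0          # maximize eps
    A_ub = np.zeros((S, num_vars)); b_ub = np.zeros(S)
    for s in range(S):
        A_ub[s, 0] = 1.0
        b_ub[s] = -V[n, s]
        for a in range(A):
            A_ub[s, idx_c(a)] -= R[s, a]
            for s2 in range(S):
                for o in range(Z):
                    for n2 in range(N):
                        A_ub[s, idx_caon(a, o, n2)] -= (
                            gamma * T[s, a, s2] * O[s2, a, o] * V[n2, s2])

    A_eq = np.zeros((1 + A * Z, num_vars)); b_eq = np.zeros(1 + A * Z)
    A_eq[0, 1:1 + A] = 1.0; b_eq[0] = 1.0            # sum_a c_a = 1
    for a in range(A):
        for o in range(Z):
            row = 1 + a * Z + o
            A_eq[row, idx_c(a)] = -1.0
            A_eq[row, [idx_caon(a, o, n2) for n2 in range(N)]] = 1.0

    bounds = [(None, None)] + [(0, 1)] * (num_vars - 1)
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return -res.fun, res.x       # (eps, parameters of the new node)
```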

13 Local Optima
This form of policy improvement is prone to converging to local optima
When every node's value vector is merely tangent to the backed-up vectors, ε = 0 and no improvement is possible
An escape technique was suggested by Poupart & Boutilier (2003) for BPI
(The slide's figure plots value vectors over P(s) to illustrate the tangency condition.)

14 I-POMDP Generalization: Nested Controllers
Nested controllers are analogous to nested beliefs and embed recursive reasoning
Starting from level 0 upwards, for each level l, construct a finite state controller for each frame of each agent
For convenience of representation, assume two agents and one frame per agent at each level
Example of the nesting: agent i's level-2 controller reasons over agent j's level-1 controller, which in turn reasons over agent i's level-0 controller

15 Interactive BPI: Policy Evaluation
Compute the value vector of each node using the estimate of the other agent's model by solving a system of linear equations
For each node n_i,l and each interactive state is = (s, n_j,l-1), solve for the value V(n_i,l, is)
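A hedged sketch of this evaluation step, with the other agent's controller node treated as part of the interactive state; array names and layouts are my own assumptions, not the paper's notation.

```python
import numpy as np

# V_i(n_i, s, n_j) = sum_{a_i,a_j} psi_i(n_i,a_i) psi_j(n_j,a_j) [ R_i(s,a_i,a_j)
#   + gamma * sum_{s',o_i,o_j,n_i',n_j'} T(s,a_i,a_j,s') O_i(s',a_i,a_j,o_i)
#       O_j(s',a_i,a_j,o_j) eta_i(n_i,a_i,o_i,n_i') eta_j(n_j,a_j,o_j,n_j')
#       V_i(n_i', s', n_j') ]
def evaluate_ibpi(psi_i, eta_i, psi_j, eta_j, T, Oi, Oj, Ri, gamma=0.95):
    Ni, Ai = psi_i.shape
    Nj, Aj = psi_j.shape
    S = T.shape[0]
    dim = Ni * S * Nj
    C = np.zeros((dim, dim))
    r = np.zeros(dim)
    def row(ni, s, nj): return (ni * S + s) * Nj + nj
    for ni in range(Ni):
        for s in range(S):
            for nj in range(Nj):
                rw = row(ni, s, nj)
                for ai in range(Ai):
                    for aj in range(Aj):
                        w = psi_i[ni, ai] * psi_j[nj, aj]
                        r[rw] += w * Ri[s, ai, aj]
                        for s2 in range(S):
                            for oi in range(Oi.shape[3]):
                                for oj in range(Oj.shape[3]):
                                    p = (w * gamma * T[s, ai, aj, s2] *
                                         Oi[s2, ai, aj, oi] * Oj[s2, ai, aj, oj])
                                    if p == 0.0:
                                        continue
                                    for ni2 in range(Ni):
                                        for nj2 in range(Nj):
                                            C[rw, row(ni2, s2, nj2)] += (
                                                p * eta_i[ni, ai, oi, ni2] *
                                                eta_j[nj, aj, oj, nj2])
    V = np.linalg.solve(np.eye(dim) - C, r)
    return V.reshape(Ni, S, Nj)
```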

16 I-BPI: Policy Improvement
Pick a node n_i,l and perform a partial backup, using a linear program to construct another node n'_i,l that pointwise dominates n_i,l by some ε > 0
The new vector dominates the old vector by ε and hence replaces it
(The slide's figure plots the old and new value vectors over P(s), separated by ε.)

17 I-BPI: Policy Improvement
Pick a node n_i,l and perform a partial backup using a linear program to construct another node that pointwise dominates n_i,l by some ε > 0
Objective function: maximize ε
Variables: ε and the probabilities defining the new node's stochastic action and node-transition policies
Constraints: the new node's value exceeds the old node's value by ε at every interactive state, and the probabilities form valid distributions

18 Escaping Local Optima
Analogous to the escape technique for POMDPs
(The slide's figure plots value vectors over P(s), with the tangent belief b_T and reachable beliefs b_R1, b_R2 from which new nodes are added.)

19 Algorithm: I-BPI
1. Starting from level 0 up to level l, construct a one-node controller for each level with a random action and a transition to itself
2. Reformulate the interactive state space and evaluate
(The slide's figure shows the nested controllers for levels L0 through Ll growing over time.)

20 Algorithm: I-BPI
3. Starting from level 0 up to level l, perform one step of the backup operator, adding at most |A_i(j)| nodes
(The slide's figure shows the controllers at levels L0 through Ll after the backup step.)

21 Algorithm: I-BPI
4. Starting from level 0 up to level l, reformulate the interactive state space, then perform policy evaluation followed by policy improvement at each level
(The slide's figure shows the controllers at levels L0 through Ll growing over time.)

22 Algorithm: I-BPI
5. Repeat step 4 until convergence
6. If converged, push the nested controller out of the local optimum by adding new nodes (see the sketch below)
(The slide's figure shows the controllers at levels L0 through Ll growing over time.)
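A hedged structural sketch of the outer loop described on slides 19 to 22. The level-wise operations are passed in as callables because their details (interactive-state reformulation, evaluation, the LP improvement, the escape step) are given in the paper, not here; none of the names below are the authors' API.

```python
def interactive_bpi(levels, init, reformulate_is, evaluate,
                    one_step_backup, improve, escape, max_iters=1000):
    # Step 1: one-node controller per level (random action, self-transition).
    controllers = [init(l) for l in range(levels + 1)]
    # Step 2: reformulate the IS space and evaluate each level, bottom-up.
    for l in range(levels + 1):
        reformulate_is(controllers, l)
        evaluate(controllers, l)
    # Step 3: one step of the full backup operator at each level.
    for l in range(levels + 1):
        one_step_backup(controllers, l)
    # Steps 4-6: iterate evaluation and LP-based partial backups; on
    # convergence, try to escape the local optimum by adding nodes.
    for _ in range(max_iters):
        improved = False
        for l in range(levels + 1):
            reformulate_is(controllers, l)
            evaluate(controllers, l)
            improved |= improve(controllers, l)
        if not improved:
            if not escape(controllers):
                break
    return controllers
```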

23 Evaluation
Runtime of the algorithm and average rewards from simulations (* denotes expected rewards obtained from the value vectors)
AUAV: 81 states, 5 actions, 4 observations
Money laundering: 99 states, 11 actions, 9 observations
I-BPI scales to these larger problems

24 Evaluation
Simulation results for the multiagent tiger problem, obtained by simulating the performance of agent controllers of various sizes for levels 1 through 4

25 Discussion
Advantages of I-BPI:
  Significantly quicker and scales to large problems (hundreds of states, tens of actions and observations)
  Mitigates the curses of history and dimensionality
  Improved solution quality
Limitations:
  Prone to local optima; the escape technique may not work for certain local optima
  Not entirely free from the curses of history and dimensionality
Future work:
  Scale to even larger problems and more agents
  Mealy-machine implementation of the controllers (Amato et al., 2011)

26 Thank you… Poster #731 today at 16:00-17:00 (Panel 98) Acknowledgement: This research is partially supported by an NSF CAREER grant, #IIS-0845036

27 Policy Improvement
Apply the backup operator, i.e., construct new nodes for every combination of an action and a transition to nodes in the current controller: |A| |N|^|Z| new nodes
Add them to the controller
(The slide's figure shows the |A| action choices and, for each observation Z_1 through Z_|Z|, a choice among the |N| successor nodes.)

28 Introduction: POMDP
POMDP: a framework for optimal sequential decision making under uncertainty in single-agent settings
  S: set of states
  A: set of actions
  Z: set of observations
  T: S × A → Δ(S), transition function
  O: S × A → Δ(Z), observation function
  R: S × A → R, reward function
  γ: discount factor; h: horizon
The agent maintains a belief b ∈ Δ(S) over the physical states
The objective is to find a policy π: b → A that maximizes the long-term expected reward: ER = immediate reward + discounted future reward
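A minimal sketch of the single-agent belief update this slide refers to; the array layout is an assumption.

```python
import numpy as np

# b'(s') is proportional to O(s', a, z) * sum_s T(s, a, s') b(s).
def belief_update(b, a, z, T, O):
    """b: (|S|,), T: (|S|, |A|, |S|), O: (|S|, |A|, |Z|)."""
    pred = T[:, a, :].T @ b            # predicted next-state distribution
    post = O[:, a, z] * pred           # weight by observation likelihood
    return post / post.sum()           # normalize
```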

29 Future Work
Extend the approach to problems with even larger dimensions
Extend to problems with more than two agents
Mealy-machine implementation of the finite state controllers (Amato et al., 2011)


