
1 Markov Models for Multi-Agent Coordination Maayan Roth Multi-Robot Reading Group April 13, 2005

2 Multi-Agent Coordination
Teams:
– Agents work together to achieve a common goal
– No individual motivations
Objective:
– Generate policies (individually or globally) to yield the best team performance

3 Observability [Pynadath and Tambe, 2002]
"Observability" - the degree to which agents can, either individually or as a team, identify the current world state
– Individual observability: MMDP [Boutilier, 1996]
– Collective observability: O1 + O2 + … + On uniquely identify the state, e.g. [Xuan, Lesser, Zilberstein, 2001]
– Collective partial observability: DEC-POMDP [Bernstein et al., 2000], POIPSG [Peshkin, Kim, Meuleau, Kaelbling, 2000]
– Non-observability

4 Communication [Pynadath and Tambe, 2002]
"Communication" - explicit message-passing from one agent to another
– Free communication: no cost to send messages; transforms MMDP to MDP and DEC-POMDP to POMDP
– General communication: communication is available but has cost or is limited
– No communication: no explicit message-passing

5 Taxonomy of Cooperative MAS
– Full observability: MMDP; with free communication, MDP
– Collective observability: DEC-MDP; with free communication, MDP
– Collective partial observability: DEC-POMDP; with free communication, POMDP
– Unobservability: open-loop control?

6 Complexity
                  Individually Observable   Collectively Observable   Collectively Partially Observable
No Comm.          P-complete                NEXP-complete             NEXP-complete
General Comm.     P-complete                NEXP-complete             NEXP-complete
Free Comm.        P-complete                P-complete                PSPACE-complete

7 Multi-Agent MDP (MMDP) [Boutilier, 1996]
Also called IPSG (identical payoff stochastic game)
M = ⟨S, {A_i}, T, R⟩
– S is the set of possible world states
– {A_i} is the set of joint actions for the m agents, where a_i ∈ A_i
– T defines transition probabilities over joint actions
– R is the team reward function
State is fully observable by each agent
P-complete
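Because every agent sees the full state, an MMDP can be solved centrally as an ordinary MDP whose action set is the joint action space. A minimal value-iteration sketch; the two-state dynamics, rewards, and names below are invented for illustration, not taken from the talk:

```python
import itertools

# Toy MMDP solved as an MDP over joint actions (illustrative only).
states = ["s1", "s2"]
agent_actions = [["a", "b"], ["a", "b"]]            # A_1, A_2
joint_actions = list(itertools.product(*agent_actions))

def T(s, ja, s_next):
    """P(s_next | s, joint action): matching actions tend to move s1 -> s2."""
    if s == "s1":
        stay = 0.2 if ja[0] == ja[1] else 0.9
        return stay if s_next == "s1" else 1.0 - stay
    return 1.0 if s_next == "s2" else 0.0           # s2 is absorbing

def R(s, ja):
    """Shared team reward: coordinated actions in s1 are worth more."""
    if s == "s1":
        return 10.0 if ja[0] == ja[1] else -10.0
    return 0.0

gamma, V = 0.9, {s: 0.0 for s in states}
for _ in range(100):                                 # value iteration
    V = {s: max(R(s, ja) + gamma * sum(T(s, ja, s2) * V[s2] for s2 in states)
                for ja in joint_actions)
         for s in states}

policy = {s: max(joint_actions,
                 key=lambda ja: R(s, ja) + gamma * sum(T(s, ja, s2) * V[s2] for s2 in states))
          for s in states}
print(V, policy)
```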

8 Coordination Problems
Multiple equilibria – two optimal joint policies
[State-transition diagram: states s1–s6 with rewards +10, -10, and +5]
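The problem can be seen in a one-shot payoff table with two equally good joint choices; the payoffs below are my own illustration, loosely following the +10 / -10 / +5 labels in the diagram:

```python
# Two optimal joint policies: <a,a> and <b,b> both earn +10, so each agent's
# "optimal" individual choice is ambiguous without a coordination mechanism.
# (Payoffs are illustrative, not the exact example from the slide.)
reward = {
    ("a", "a"): +10,       # coordinated on a
    ("b", "b"): +10,       # coordinated on b
    ("a", "b"): -10,       # miscoordination
    ("b", "a"): -10,
    ("safe", "safe"): +5,  # a safe but suboptimal joint choice
}
optimal = [ja for ja, r in reward.items() if r == max(reward.values())]
print("Optimal joint actions:", optimal)   # two equilibria -> coordination problem
```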

9 Randomization with Learning
Expand the MMDP to include an FSM that selects actions based on whether the agents are coordinated or uncoordinated
Build the expanded MMDP by adding a coordination mechanism at every state where a coordination problem is discovered
[FSM diagram: states U, A, B; U selects random(a,b), with transitions labeled a and b]
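A minimal sketch of the kind of coordination mechanism the slide describes: each agent runs a small FSM that randomizes while the team is uncoordinated and locks in once both agents happen to pick the same action. The details (state names aside) are my assumptions, not Boutilier's exact construction:

```python
import random

class CoordinationFSM:
    """Per-agent FSM: state U = uncoordinated (randomize), A/B = locked in."""
    def __init__(self):
        self.state = "U"

    def act(self):
        if self.state == "U":
            return random.choice(["a", "b"])   # random(a, b) while uncoordinated
        return "a" if self.state == "A" else "b"

    def observe(self, my_action, other_action):
        # Once the agents happen to agree, both lock onto that action.
        if self.state == "U" and my_action == other_action:
            self.state = "A" if my_action == "a" else "B"

agents = [CoordinationFSM(), CoordinationFSM()]
for step in range(10):
    a1, a2 = agents[0].act(), agents[1].act()
    agents[0].observe(a1, a2)
    agents[1].observe(a2, a1)
    print(step, a1, a2, [ag.state for ag in agents])
```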

10 Multi-Agent POMDP
DEC-POMDP [Bernstein et al., 2000], MTDP [Pynadath et al., 2002], POIPSG [Peshkin et al., 2000]
M = ⟨S, {A_i}, T, {Ω_i}, O, R⟩
– S is the set of possible world states
– {A_i} is the set of joint actions, where a_i ∈ A_i
– T defines transition probabilities over joint actions
– {Ω_i} is the set of joint observations, where ω_i ∈ Ω_i
– O defines observation probabilities over joint actions and joint observations
– R is the team reward function
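The model components translate directly into a container type; a sketch (the field names are mine):

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

JointAction = Tuple[str, ...]       # one action per agent
JointObservation = Tuple[str, ...]  # one observation per agent

@dataclass
class DecPOMDP:
    """M = <S, {A_i}, T, {Omega_i}, O, R> as listed on the slide."""
    states: List[str]
    agent_actions: List[List[str]]           # A_i for each agent i
    agent_observations: List[List[str]]      # Omega_i for each agent i
    T: Callable[[str, JointAction, str], float]               # P(s' | s, a)
    O: Callable[[str, JointAction, JointObservation], float]  # P(omega | s', a)
    R: Callable[[str, JointAction], float]   # shared team reward
```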

11 Complexity [Bernstein, Zilberstein, Immerman, 2000]
For all m ≥ 2, the DEC-POMDP problem with m agents is NEXP-complete
– "For a given DEC-POMDP with finite time horizon T and an integer K, is there a policy which yields a total reward ≥ K?"
– Agents must reason about all possible actions and observations of their teammates
For all m ≥ 3, the DEC-MDP problem with m agents is NEXP-complete

12 Dynamic Programming [Hansen, Bernstein, Zilberstein, 2004]
Finds the optimal solution
Partially observable stochastic game (POSG) framework
– Iterate between a DP step and a pruning step that removes dominated strategies
Experimental results show speed-ups in some domains
– Still NEXP-complete, so hyper-exponential in the worst case
No communication
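As a simplified sketch of the pruning step only: keep a policy tree unless some other tree is at least as good everywhere. The real algorithm tests dominance over distributions of states and opponent policies with a linear program; the pointwise test below is a stand-in for illustration:

```python
def prune_pointwise_dominated(value_vectors):
    """value_vectors: {policy_tree_id: [value at each (state, opponent-policy) point]}.
    Keep only trees whose value vector is not pointwise dominated by another tree.
    (Simplified stand-in for the LP-based dominance test in Hansen et al., 2004.)"""
    kept = {}
    for pid, vec in value_vectors.items():
        dominated = any(
            other != pid
            and all(o >= v for o, v in zip(ovec, vec))
            and any(o > v for o, v in zip(ovec, vec))
            for other, ovec in value_vectors.items()
        )
        if not dominated:
            kept[pid] = vec
    return kept

# Example: tree "t2" is dominated by "t1" and gets pruned; "t1" and "t3" survive.
print(prune_pointwise_dominated({"t1": [3.0, 5.0], "t2": [2.0, 4.0], "t3": [6.0, 1.0]}))
```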

13 Transition Independence [Becker, Zilberstein, Lesser, Goldman, 2003]
DEC-MDP – collective observability
Transition independence:
– Local state transitions: each agent observes its local state, and individual actions only affect local state transitions
– Team connected only through the joint reward
Coverage set algorithm – finds the optimal policy quickly in experimental domains
No communication
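A sketch of what transition independence means structurally: the joint transition factors into per-agent local transitions, and only the reward couples the agents. The concrete functions here are placeholders:

```python
from typing import Callable, Sequence, Tuple

LocalState = str
LocalAction = str

def joint_transition(local_Ts: Sequence[Callable[[LocalState, LocalAction, LocalState], float]],
                     s: Tuple[LocalState, ...], a: Tuple[LocalAction, ...],
                     s_next: Tuple[LocalState, ...]) -> float:
    """Transition independence: P(s' | s, a) = product of each agent's local P(s_i' | s_i, a_i)."""
    p = 1.0
    for T_i, s_i, a_i, s_i_next in zip(local_Ts, s, a, s_next):
        p *= T_i(s_i, a_i, s_i_next)
    return p

# Example with two agents whose local state deterministically follows their own action;
# the team is coupled only through a joint reward R(s, a), which the coverage set
# algorithm exploits to search each agent's local policy space efficiently.
T_i = lambda s_i, a_i, s_next: 1.0 if s_next == a_i else 0.0
print(joint_transition([T_i, T_i], ("x", "y"), ("go", "stay"), ("go", "stay")))  # 1.0
```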

14 Efficient Policy Computation [Nair, Pynadath, Yokoo, Tambe, Marsella, 2003]
Joint Equilibrium-based Search for Policies (JESP):
– initialize random policies for all agents
– for each agent i: fix the policies of all other agents and find the optimal policy for agent i
– repeat until no policies change
Finds locally optimal policies with potentially exponential speedups over finding the global optimum
No communication
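A high-level sketch of that alternating best-response loop; `best_response` and `evaluate` are placeholders for the single-agent solver and policy evaluation JESP actually uses, not the Nair et al. implementation:

```python
def jesp(num_agents, init_random_policy, best_response, evaluate, max_iters=100):
    """Joint Equilibrium-based Search for Policies (sketch).
    best_response(i, policies): agent i's optimal policy with all other policies fixed.
    evaluate(policies): team value of a joint policy. Both are caller-supplied."""
    policies = [init_random_policy(i) for i in range(num_agents)]
    value = evaluate(policies)
    changed = True
    while changed and max_iters > 0:
        changed, max_iters = False, max_iters - 1
        for i in range(num_agents):
            candidate = best_response(i, policies)
            new_policies = policies[:i] + [candidate] + policies[i + 1:]
            new_value = evaluate(new_policies)
            if new_value > value + 1e-9:          # keep only strict improvements
                policies, value, changed = new_policies, new_value, True
    return policies, value                         # a locally optimal joint policy
```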

15 Communication [Pynadath and Tambe, 2002]
COM-MTDP framework for analyzing communication
– Enhance the model with {Σ_i}, where Σ_i is the set of possible messages agent i can send
– The reward function includes the cost (≤ 0) of communication
If communication is free:
– DEC-POMDP is reducible to a single-agent POMDP (PSPACE-complete)
– The optimal communication policy is to communicate at every time step
Under general communication (no restrictions on cost), DEC-POMDP is still NEXP-complete
– Σ_i may contain every possible observation history

16 COMM-JESP [Nair, Roth, Yokoo, Tambe, 2004]
Add a SYNC action to the domain
– If one agent chooses SYNC, all other agents SYNC
– At SYNC, send the entire observation history since the last SYNC
SYNC brings the agents to a synchronized belief over world states
Policies are indexed by the root synchronized belief and the observation history since the last SYNC
[Belief-tree diagram: from a synchronized belief (SL 0.5, SR 0.5) at t=0, the joint action {Listen, Listen} and observations HL/HR lead to distinct belief states; a SYNC at t=2 returns the team to a synchronized belief, e.g. (SL 0.97, SR 0.03)]
"At-most K" heuristic – there must be a SYNC within at most K timesteps
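A minimal sketch of the execution-time bookkeeping the SYNC action implies: buffer observations since the last SYNC, flush them to teammates on SYNC, and enforce the at-most-K bound. The class and its trigger rule are my assumptions, not the COMM-JESP policy structure itself:

```python
class SyncAgent:
    """Buffers observations since the last SYNC; flushes them when the team syncs."""
    def __init__(self, k_max):
        self.k_max = k_max            # "at-most K" bound on steps between SYNCs
        self.history = []             # observations since the last SYNC

    def observe(self, obs):
        self.history.append(obs)

    def wants_sync(self):
        # The real policy chooses SYNC by value; here we only enforce the K bound.
        return len(self.history) >= self.k_max

    def sync(self):
        """On SYNC, the whole history since the last SYNC is sent to teammates,
        which restores a synchronized joint belief; then the buffer is cleared."""
        message, self.history = list(self.history), []
        return message

agent = SyncAgent(k_max=2)
agent.observe("HL"); agent.observe("HR")
if agent.wants_sync():
    print("SYNC, sending:", agent.sync())
```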

17 Dec-Comm [Roth, Simmons, Veloso, 2005]
At plan-time, assume communication is free
– generate a joint policy using a single-agent POMDP solver
Reason over possible joint beliefs to execute the policy
Use communication to integrate local observations into the team belief

18 Tiger Domain: (States, Actions)
Two-agent tiger problem [Nair et al., 2003]:
States: S = {SL, SR} – the tiger is either behind the left door or behind the right door
Individual actions: a_i ∈ {OpenL, OpenR, Listen} – a robot can open the left door, open the right door, or listen

19 Tiger Domain: (Observations)
Individual observations: ω_i ∈ {HL, HR} – a robot can hear the tiger behind the left door or hear it behind the right door
Observations are noisy and independent.

20 Tiger Domain: (Reward)
Coordination problem – agents must act together for maximum reward
– Maximum reward (+20) when both agents open the door with the treasure
– Minimum reward (-100) when only one agent opens the door with the tiger
– Listen has a small cost (-1 per agent)
– Both agents opening the door with the tiger leads to a medium negative reward (-50)
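Those numbers translate into a joint reward table; a sketch that encodes only the cases stated on the slide. Combinations the slide does not mention (e.g. one agent listening while the other opens the treasure door) are left as None rather than guessed:

```python
# Joint reward for the two-agent tiger problem, using only the values on the slide.
def tiger_reward(state, a1, a2):
    """state: 'SL' or 'SR' (tiger behind left / right door);
    actions: 'OpenL', 'OpenR', 'Listen'."""
    tiger = "OpenL" if state == "SL" else "OpenR"      # door hiding the tiger
    treasure = "OpenR" if state == "SL" else "OpenL"   # the opposite door
    if a1 == a2 == "Listen":
        return -2                      # -1 per listening agent
    if a1 == a2 == treasure:
        return +20                     # both open the treasure door
    if a1 == a2 == tiger:
        return -50                     # both open the tiger door
    if (a1 == tiger) != (a2 == tiger):
        return -100                    # only one agent opens the tiger door
    return None                        # case not specified on the slide

print(tiger_reward("SL", "OpenR", "OpenR"))   # +20
print(tiger_reward("SL", "OpenL", "Listen"))  # -100
```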

21 Joint Beliefs
Joint belief (b_t) – a distribution over world states
Why compute possible joint beliefs?
– action coordination
– the transition and observation functions depend on the joint action
– an agent can't accurately estimate its belief if the joint action is unknown
To ensure action coordination, agents can only reason over information known by all teammates
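The dependence on the joint action is just the standard POMDP belief update with a and ω taken as joint quantities. A minimal sketch for the two tiger states; the per-agent hearing accuracy of 0.7 is an assumption inferred from the probabilities on the later example slides, and the reset after a door opening follows the usual tiger dynamics:

```python
def update_joint_belief(b_SL, joint_action, joint_obs, p_hear=0.7):
    """Bayes update of P(SL) for the two-agent tiger problem.
    Works only because the *joint* action and observation are known; with an
    unknown teammate action the observation model would be undefined."""
    if joint_action != ("Listen", "Listen"):
        return 0.5                          # opening a door resets the problem
    def p_obs_given(state):                 # independent, noisy observations
        p = 1.0
        for o in joint_obs:
            correct = (o == "HL") == (state == "SL")
            p *= p_hear if correct else 1.0 - p_hear
        return p
    num = p_obs_given("SL") * b_SL
    den = num + p_obs_given("SR") * (1.0 - b_SL)
    return num / den

print(update_joint_belief(0.5, ("Listen", "Listen"), ("HL", "HL")))    # ~0.85
print(update_joint_belief(0.845, ("Listen", "Listen"), ("HL", "HL")))  # ~0.97
```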

22 Possible Joint Beliefs
[Joint-belief tree after a = {Listen, Listen}:]
L0: b: P(SL) = 0.5, p(b) = 1.0, ω = {}
L1, ω = {HL,HL}: b: P(SL) = 0.8, p(b) = 0.29
L1, ω = {HL,HR}: b: P(SL) = 0.5, p(b) = 0.21
L1, ω = {HR,HL}: b: P(SL) = 0.5, p(b) = 0.21
L1, ω = {HR,HR}: b: P(SL) = 0.2, p(b) = 0.29
How should agents select actions over joint beliefs?
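A sketch of how that tree can be generated: from a parent belief and the joint action {Listen, Listen}, enumerate every joint observation and record its probability and the updated belief. The 0.7 hearing accuracy is again an assumption inferred from the numbers above:

```python
import itertools

def expand_joint_beliefs(b_SL, parent_prob, p_hear=0.7):
    """Children of a joint-belief node after a = (Listen, Listen):
    one child per joint observation, with its probability and updated belief."""
    def p_obs_in(state, obs):
        correct = (obs == "HL") == (state == "SL")
        return p_hear if correct else 1.0 - p_hear

    children = []
    for joint_obs in itertools.product(["HL", "HR"], repeat=2):
        # P(joint_obs) under the current belief (observations independent given state)
        p_given_SL = p_obs_in("SL", joint_obs[0]) * p_obs_in("SL", joint_obs[1])
        p_given_SR = p_obs_in("SR", joint_obs[0]) * p_obs_in("SR", joint_obs[1])
        p_obs = b_SL * p_given_SL + (1.0 - b_SL) * p_given_SR
        new_b_SL = b_SL * p_given_SL / p_obs           # Bayes update of P(SL)
        children.append((joint_obs, parent_prob * p_obs, new_b_SL))
    return children

for obs, p, b in expand_joint_beliefs(0.5, 1.0):
    print(obs, round(p, 2), round(b, 2))   # reproduces the 0.29 / 0.21 probabilities above
```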

23 Q-POMDP Heuristic
Select a joint action over the possible joint beliefs
Q-MDP (Littman et al., 1995) – approximate solution to a large POMDP using the underlying MDP
Q-POMDP – approximate solution to a DEC-POMDP using the underlying single-agent POMDP

24 Q-POMDP Heuristic
[Same joint-belief tree as on the previous slide]
Choose the joint action by computing the expected reward over all leaves
Agents will independently select the same joint action… but the action choice is very conservative (always ⟨Listen, Listen⟩)
Dec-Comm: use communication to add local observations to the joint belief
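A sketch of the selection rule: pick the joint action whose expected value, averaged over all leaves of the possible-belief tree, is highest. Here Q(b, a) is a placeholder for the centralized POMDP action values computed at plan time, and the toy Q and leaf numbers below are made up to show the conservative outcome:

```python
def q_pomdp(leaves, joint_actions, Q):
    """leaves: list of (probability, belief) pairs for the possible joint beliefs.
    Q(belief, joint_action): action value of the underlying single-agent POMDP
    (a placeholder -- it would come from the offline POMDP solution).
    Every agent reasons over the same tree, so all pick the same joint action."""
    return max(joint_actions,
               key=lambda a: sum(p * Q(b, a) for p, b in leaves))

# Toy usage: a crude stand-in Q that only rewards opening when the belief is extreme.
leaves = [(0.29, 0.85), (0.21, 0.5), (0.21, 0.5), (0.29, 0.15)]   # (p(b), P(SL))
def toy_Q(b_SL, a):
    if a == ("Listen", "Listen"):
        return -2.0
    conf = max(b_SL, 1 - b_SL)
    return 20 * (2 * conf - 1) - 100 * (1 - conf)   # invented values, not the real ones

print(q_pomdp(leaves, [("Listen", "Listen"), ("OpenL", "OpenL"), ("OpenR", "OpenR")],
              toy_Q))   # -> ('Listen', 'Listen'): the conservative choice
```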

25 Dec-Comm Example
[Same joint-belief tree L1 as on slide 22; agent 1 observes HL, so the nodes consistent with HL ({HL,HL} and {HL,HR}) are circled]
a_NC = Q-POMDP(L_1) = …
L* = circled nodes
a_C = Q-POMDP(L*) = …
Don't communicate

26 Dec-Comm Example (cont'd)
[The tree is expanded one more step (a = …) to level L_2, with leaf beliefs ranging from P(SL) = 0.97 (p(b) = 0.12) down to P(SL) = 0.16 (p(b) = 0.06)]
a_NC = Q-POMDP(L_2) = …
L* = circled nodes
a_C = Q-POMDP(L*) = …
V(a_C) - V(a_NC) > ε → Agent 1 communicates
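A self-contained sketch of that communication decision: compare the action the team would take on the full shared tree with the one it would take on the subset consistent with this agent's own observations, and communicate only when the gain exceeds ε. How L* is obtained and how the values are compared follow my reading of the slide; Q is again a placeholder for the plan-time POMDP values:

```python
def decide_communication(leaves, consistent_leaves, joint_actions, Q, epsilon=0.1):
    """Dec-Comm style communication decision (sketch).
    leaves: all possible joint-belief leaves L_t (common knowledge).
    consistent_leaves: L*, the subset consistent with this agent's own observations.
    Each element is a (probability, belief) pair; Q(belief, joint_action) is supplied."""
    def expected_value(ls, a):
        total = sum(p for p, _ in ls)
        return sum(p * Q(b, a) for p, b in ls) / total   # renormalize over the subset
    a_nc = max(joint_actions, key=lambda a: expected_value(leaves, a))            # no comm.
    a_c = max(joint_actions, key=lambda a: expected_value(consistent_leaves, a))  # after comm.
    gain = expected_value(consistent_leaves, a_c) - expected_value(consistent_leaves, a_nc)
    return ("communicate", a_c) if gain > epsilon else ("stay silent", a_nc)
```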

27 Dec-Comm Example (cont'd)
[Same joint-belief tree as on the previous slide]
a = …
Agent 1 communicates; Agent 2 communicates
Q-POMDP(L_2) = … – Agents open the right door!

28 References
Becker, R., Zilberstein, S., Lesser, V., Goldman, C. V. Transition-independent decentralized Markov decision processes. In International Joint Conference on Autonomous Agents and Multi-Agent Systems, 2003.
Bernstein, D., Zilberstein, S., Immerman, N. The complexity of decentralized control of Markov decision processes. In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, 2000.
Boutilier, C. Sequential optimality and coordination in multiagent systems. In Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence, 1999.
Hansen, E. A., Bernstein, D. S., Zilberstein, S. Dynamic programming for partially observable stochastic games. In National Conference on Artificial Intelligence, 2004.
Nair, R., Roth, M., Yokoo, M., Tambe, M. Communication for improving policy computation in distributed POMDPs. In Proceedings of the International Conference on Autonomous Agents and Multiagent Systems, 2004.
Nair, R., Pynadath, D., Yokoo, M., Tambe, M., Marsella, S. Taming decentralized POMDPs: Towards efficient policy computation for multiagent settings. In Proceedings of the International Joint Conference on Artificial Intelligence, 2003.
Peshkin, L., Kim, K.-E., Meuleau, N., Kaelbling, L. P. Learning to cooperate via policy search. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, 2000.
Pynadath, D. and Tambe, M. The communicative multiagent team decision problem: Analyzing teamwork theories and models. In Journal of Artificial Intelligence Research, 2002.
Roth, M., Simmons, R., Veloso, M. Reasoning about joint beliefs for execution-time communication decisions. In International Joint Conference on Autonomous Agents and Multi-Agent Systems, 2005.
Xuan, P., Lesser, V., Zilberstein, S. Communication decisions in multi-agent cooperation: Model and experiments. In Proceedings of the International Joint Conference on Artificial Intelligence, 2001.

