
1 Distributed Planning in Hierarchical Factored MDPs
Carlos Guestrin, Stanford University
Geoffrey Gordon, Carnegie Mellon University

2 Multiagent Coordination Examples
Search and rescue
Factory management
Supply chain
Firefighting
Network routing
Air traffic control
Each agent can access only local information: distributed control calls for distributed planning.

3 Hierarchical Decomposition
[Figure: part-of decomposition of a car into Engine, Steering, Chassis, Exhaust, Injection, Cylinders]
Subsystems can share variables
Each subsystem only observes its local variables
Parallel decomposition → exponential state space

4 Outline
Object-based representation: hierarchical factored MDPs
Distributed planning: a message-passing algorithm based on LP decomposition
Hierarchical action selection: limited observability and communication
Reusing plans and computation: exploiting classes of objects

5 Basic Subsystem MDP
[Figure: DBN for a speed-control subsystem with variables I, S, G and reward R; primed variables denote the next time step]
Subsystem j is decomposed into:
  Internal variables X_j
  External variables Y_j
  Actions A_j
Subsystem model:
  Rewards: R_j(X_j, Y_j, A_j)
  Transitions: P_j(X_j' | X_j, Y_j, A_j)
A subsystem can be modeled with any representation.
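
As a concrete illustration, here is a minimal Python sketch of such a subsystem model; the names (Subsystem, State, Externals) and the discrete encoding are illustrative assumptions, not notation from the talk.

```python
# A minimal sketch of a subsystem model, assuming small discrete variables;
# names and types are illustrative, not from the paper.
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

State = Tuple[int, ...]       # assignment to the internal variables X_j
Externals = Tuple[int, ...]   # assignment to the external variables Y_j
Action = int

@dataclass
class Subsystem:
    internal_vars: List[str]   # X_j
    external_vars: List[str]   # Y_j (owned by neighboring subsystems)
    actions: List[Action]      # A_j
    reward: Callable[[State, Externals, Action], float]  # R_j(X_j, Y_j, A_j)
    transition: Callable[[State, Externals, Action],
                         Dict[State, float]]  # P_j(X_j' | X_j, Y_j, A_j)
```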

6 Hierarchical Subsystem Tree
Subsystem tree:
  Nodes are subsystems
  Hierarchical decomposition
  Tree reward = sum of subsystem rewards
Consistent subsystem tree:
  Running intersection property
  Consistent dynamics
Lemma: a consistent subsystem tree yields a well-defined global MDP.
[Figure: example tree with root M_1 (Transmission, variables G, C), child M_2 (Speed control, variables I, G, S), and child M_3 (Cooling, variables F, T); SepSet[M_2] and SepSet[M_3] list the variables each child shares with its parent]
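
The running intersection property says that, for every variable, the subsystems mentioning it form a connected subtree. Here is a small sketch of such a check, assuming each node carries the set of variable names it mentions; function and variable names are hypothetical.

```python
# In a tree, a set of nodes is connected iff it induces (size - 1) tree edges.
from typing import Dict, List, Set

def has_running_intersection(variables: Dict[str, Set[str]],
                             children: Dict[str, List[str]],
                             root: str) -> bool:
    occurrences: Dict[str, int] = {}   # how many nodes mention each variable
    shared_edges: Dict[str, int] = {}  # tree edges whose endpoints share it
    def visit(node: str) -> None:
        for v in variables[node]:
            occurrences[v] = occurrences.get(v, 0) + 1
        for child in children.get(node, []):
            for v in variables[node] & variables[child]:
                shared_edges[v] = shared_edges.get(v, 0) + 1
            visit(child)
    visit(root)
    return all(shared_edges.get(v, 0) == n - 1
               for v, n in occurrences.items())

# Example mirroring the slide's tree (variable sets assumed for illustration):
variables = {"M1": {"G", "C"}, "M2": {"I", "G", "S"}, "M3": {"F", "T"}}
children = {"M1": ["M2", "M3"]}
print(has_running_intersection(variables, children, "M1"))   # -> True
```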

7 Relationship to Factored MDPs
[Figure: DBNs comparing a multiagent factored MDP [Guestrin et al. '01] with a hierarchical factored MDP, over variables X_1, X_2, X_3, actions A_1, A_2, rewards R_1, R_2, R_3, basis functions h_1, h_2, and SepSet[M_2]]
Representational power is equivalent:
  A hierarchical factored MDP = a multiagent factored MDP with a particular choice of basis functions
New capabilities:
  Fully distributed planning algorithm
  Reuse for knowledge representation
  Reuse of computation
The MDP counterpart to Object-Oriented Bayes Nets (OOBNs) [Koller and Pfeffer '97]

8 Planning for Hierarchical Factored MDPs
Action space: joint action a = {a_1, ..., a_n} for all subsystems
State space: joint state x of the entire system
Reward function: total reward r
Action and state spaces are exponential in the number of subsystems.
Exploit the hierarchical structure:
  Efficient, distributed approximate planning algorithm
  Simple message-passing approach
  Each subsystem accesses only its local model
  Each local model is solved by any standard MDP algorithm

9 Solving MDPs as LPs
Bellman constraint: if x goes to y under action a with reward r, then V(x) ≥ V(y) + r = Q(a, x)
Similarly for stochastic transitions.
The optimal V* satisfies all Bellman constraints and is the componentwise smallest such function, so it solves:
  min V(x) + V(y) + V(z) + V(g)
  s.t. V(x) ≥ V(y) + 1
       V(y) ≥ V(g) + 3
       V(x) ≥ V(z) + 2
       V(z) ≥ V(g) + 1
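
A worked version of this small LP in Python, assuming g is a zero-value goal state (V(g) = 0), which keeps the LP bounded. scipy expects constraints as A_ub @ v ≤ b_ub, so each V(x) ≥ V(y) + r is negated.

```python
import numpy as np
from scipy.optimize import linprog

# Variable order: v = [V(x), V(y), V(z)]; V(g) is pinned to 0.
c = np.ones(3)                 # minimize V(x) + V(y) + V(z)
A_ub = np.array([
    [-1.0,  1.0,  0.0],        # V(x) >= V(y) + 1
    [ 0.0, -1.0,  0.0],        # V(y) >= V(g) + 3 = 3
    [-1.0,  0.0,  1.0],        # V(x) >= V(z) + 2
    [ 0.0,  0.0, -1.0],        # V(z) >= V(g) + 1 = 1
])
b_ub = np.array([-1.0, -3.0, -2.0, -1.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 3)
print(res.x)   # -> [4. 3. 1.]: the componentwise-smallest feasible V
```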

10 Decomposable Value Functions
Linear combination of restricted-domain functions:
  V(x) = Σ_i w_i h_i(x)
[Bellman et al. '63; Schweitzer & Seidmann '85; Tsitsiklis & Van Roy '96; Koller & Parr '99, '00; Guestrin et al. '01]
Each h_i is the status of a small part of a complex system, e.g.:
  Status of a machine and its neighbors
  Load on a machine
Must find weights w giving a good approximate value function.
Well-designed h_i → exponentially fewer parameters
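
A tiny sketch of evaluating such a decomposed value function, with made-up basis functions that each read one coordinate of the state:

```python
import numpy as np

def approx_value(x, basis_fns, w):
    """V(x) = sum_i w_i * h_i(x); each h_i reads only a small part of x."""
    return sum(wi * hi(x) for wi, hi in zip(w, basis_fns))

# Example state: (machine_status, load); two restricted-domain bases.
basis_fns = [lambda x: float(x[0]),   # h_1: status of one machine
             lambda x: 1.0 - x[1]]    # h_2: (inverse) load on that machine
w = np.array([2.0, 0.5])
print(approx_value((1, 0.25), basis_fns, w))   # -> 2.375
```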

11 Approximate Linear Programming
To solve a subsystem-tree MDP as an LP:
  The overall state is the cross-product of the subsystem states
  The Bellman LP has exponentially many constraints and variables ⇒ we need to approximate
Write V(x) = V_1(X_1) + V_2(X_2) + ...
  Minimize V_1(X_1) + V_2(X_2) + ...
  s.t. V_1(X_1) + V_2(X_2) + ... ≥ V_1(Y_1) + V_2(Y_2) + ... + R_1 + R_2 + ...
One variable V_i(X_i) for each state of each subsystem; one constraint for every joint state and action.
Each V_i and Q_i depends only on small sets of variables and actions ⇒ generates polynomially-sized LPs for factored MDPs [Guestrin et al. '01]
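
A toy instance of this approximate LP, assuming two binary subsystems with a single shared action a, deterministic dynamics x_i' = a, and rewards R_i(x_i) = x_i; the decomposition V = V_1 + V_2 keeps every constraint sparse even though there is one constraint per joint state and action.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

gamma = 0.9
c = np.ones(4)   # minimize the sum of V1(0), V1(1), V2(0), V2(1)
A_ub, b_ub = [], []
for x1, x2, a in itertools.product([0, 1], repeat=3):
    # Constraint: V1(x1) + V2(x2) >= x1 + x2 + gamma*(V1(a) + V2(a)),
    # rewritten into A_ub @ v <= b_ub form.
    row = np.zeros(4)
    row[x1] -= 1.0
    row[2 + x2] -= 1.0
    row[a] += gamma
    row[2 + a] += gamma
    A_ub.append(row)
    b_ub.append(-(x1 + x2))

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * 4)
print(res.x)  # one optimum; the V1/V2 split is fixed only up to a constant
```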

12 Overview of Algorithm
Each subsystem solves a local (stand-alone) MDP.
Each subsystem computes messages by solving a simple local LP:
  Sends a 'constraint message' to its parent
  Sends 'reward messages' to its children
Repeat until convergence.
[Figure: subsystem tree with parent M_j and children M_k, ..., M_l; reward messages flow down the tree, constraint messages flow up]

13 Stand-alone MDPs and Reward Messages
Subsystem MDP:
  State: (X_j, Y_j)
  Actions: A_j
  Rewards: R_j(X_j, Y_j, A_j)
  Transitions: P_j(X_j' | X_j, Y_j, A_j)
Stand-alone MDP, given reward message S_j from the parent and S_k to each child k:
  State: X_j
  Actions: (A_j, Y_j)
  Rewards: R_j(X_j, Y_j, A_j) - S_j + Σ_k S_k
  Transitions: P_j(X_j' | X_j, Y_j, A_j)
Reward messages are functions over the SepSets.
Solve the stand-alone MDP using any algorithm, then obtain the visitation frequencies of the resulting policy: μ_j = discounted frequency of visits to each state-action pair.
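
A sketch of computing these discounted visitation frequencies for a fixed policy on a small enumerated MDP, using the standard identity μ = (I − γ P_π^T)^{-1} α with start distribution α; the helper name is ours.

```python
import numpy as np

def visitation_frequencies(P_pi: np.ndarray, alpha: np.ndarray,
                           gamma: float) -> np.ndarray:
    """P_pi[s, s'] = transition probability under the policy; returns mu[s]."""
    n = P_pi.shape[0]
    return np.linalg.solve(np.eye(n) - gamma * P_pi.T, alpha)

# Example: two states, policy cycles 0 -> 1 -> 0, start in state 0.
P_pi = np.array([[0.0, 1.0],
                 [1.0, 0.0]])
mu = visitation_frequencies(P_pi, np.array([1.0, 0.0]), gamma=0.9)
print(mu)   # -> [1/(1-0.81), 0.9/(1-0.81)] ≈ [5.263, 4.737]
```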

14 Visitation Frequencies Dual
The dual LP variables are the discounted frequencies of visits to each state-action pair.
Subsystems must agree on the frequencies of the shared variables → reward messages
Approximation → relaxed enforcement of these agreement constraints
[Figure: the speed-control subsystem M_2 with variables I, G, S]
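
For instance, if the speed-control subsystem's frequencies are stored as an array with one axis per variable (I, G, S), and only G is shared with the parent (an assumption for illustration), the quantities that must agree are the marginals over G:

```python
import numpy as np

# mu[i, g, s] = visitation frequencies over (I, G, S); SepSet = {G} assumed.
rng = np.random.default_rng(0)
mu = rng.random((2, 2, 2))
mu /= mu.sum()                    # normalize for the example

mu_sepset = mu.sum(axis=(0, 2))   # marginalize out I and S, keep G
print(mu_sepset)                  # frequencies over G alone
```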

15 Overview of Algorithm: Detailed
Each subsystem solves a local (stand-alone) MDP:
  Computes its local visitation frequencies μ_j
  Adds a constraint to the reward message LP
Each subsystem computes messages by solving a simple local LP:
  Sends a 'constraint message' to its parent: the visitation frequencies for the SepSet variables
  Sends 'reward messages' to its children
Repeat until convergence; a skeleton of the loop follows below.
[Figure: the subsystem tree again, with M_j exchanging messages with M_k, ..., M_l]
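
A high-level Python skeleton of this loop, assuming hypothetical helpers (zero_message, solve_standalone_mdp, visitation_frequencies_of, sepset_frequencies, solve_reward_message_lp, message_change) that wrap the pieces sketched earlier; it mirrors the slide's outline rather than the paper's exact procedure.

```python
def distributed_planning(tree, max_iters=100, tol=1e-6):
    # S_j for every subsystem, initially zero (zero_message is hypothetical).
    rewards = {m: zero_message(m) for m in tree.nodes}
    for _ in range(max_iters):
        constraints = {}
        for m in tree.nodes:                  # runs independently per subsystem
            policy = solve_standalone_mdp(m, rewards)    # any MDP solver
            mu = visitation_frequencies_of(m, policy)
            constraints[m] = sepset_frequencies(m, mu)   # message to parent
        new_rewards = {}
        for m in tree.internal_nodes:         # local LP at each internal node
            for child, msg in solve_reward_message_lp(m, constraints).items():
                new_rewards[child] = msg      # reward messages to children
        if message_change(rewards, new_rewards) < tol:
            break
        rewards = new_rewards
    return rewards
```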

16 Reward Message LP
[Figure: the reward message LP, shown as an equation on the slide]
The dual of this LP yields the reward messages S_k for the children.
The dual also yields mixing weights p_j, p_k ⇒ these enforce consistent frequencies.

17 Computing Reward Messages
Rows of Φ_jj and L_j hold the visitation frequencies and the value of each policy visited by M_j.
Rows of Φ_jk are those frequencies marginalized to SepSet[M_k].
Messages: the dual of the reward message LP generates mixed policies.
p_j and p_k are mixing parameters; they force parent and children to agree on the visitation frequencies of the SepSet.
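
A schematic sketch of this consistency LP for one parent-child pair, with made-up numbers: L_j and L_k hold each candidate policy's value, Φ_jk and Φ_kk hold each policy's frequencies marginalized to a binary SepSet, and we pick mixing weights that maximize total value while matching SepSet frequencies. The paper's actual master LP differs in its details; this only illustrates the structure.

```python
import numpy as np
from scipy.optimize import linprog

L_j = np.array([1.0, 3.0])          # values of M_j's two candidate policies
L_k = np.array([2.0, 0.0])          # values of M_k's two candidate policies
Phi_jk = np.array([[0.8, 0.2],      # each row: a policy's frequencies
                   [0.2, 0.8]])     # over the binary SepSet variable
Phi_kk = np.array([[0.6, 0.4],
                   [0.3, 0.7]])

# Variables: [p_j (2 entries), p_k (2 entries)]; maximize total mixed value.
c = -np.concatenate([L_j, L_k])
A_eq = np.vstack([
    np.concatenate([Phi_jk[:, 0], -Phi_kk[:, 0]]),   # SepSet=0 freqs agree
    np.concatenate([Phi_jk[:, 1], -Phi_kk[:, 1]]),   # SepSet=1 freqs agree
    [1.0, 1.0, 0.0, 0.0],                            # p_j is a distribution
    [0.0, 0.0, 1.0, 1.0],                            # p_k is a distribution
])
b_eq = np.array([0.0, 0.0, 1.0, 1.0])
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 4)
print(res.x)   # -> approx [0.667, 0.333, 1.0, 0.0]
```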

18 Convergence Result
The planning algorithm is a special case of nested Benders decomposition:
  One Benders split for each internal node N of the subsystem tree
  One subproblem is N itself
  The remaining subproblems are the subtrees of N's children (decomposed recursively)
  The master problem determines the reward messages
The result follows from the correctness of Benders decomposition:
In a finite number of iterations, the algorithm produces the best possible value function (i.e., the same as a centralized planner).

19 Hierarchical Action Selection
Distributed planning obtains the value function; distributed message passing then obtains the action choice (the policy):
  Each subsystem sends the value of its conditional policy to its parent
  Each subsystem sends its action choice to its children
This requires only limited observability and limited communication; a sketch follows below.
[Figure: subsystem tree; values of conditional policies flow up, action choices flow down]
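
A sketch of this two-pass action selection on a two-node tree, assuming each subsystem holds a local Q-table and, for brevity, that the parent's action determines the SepSet assignment. The child sends up the best value it can achieve for each SepSet assignment; the parent fixes its action and announces the result. Names and tables are illustrative.

```python
# Q-tables: parent's Q over its action a_p; child's Q over (sepset, a_c).
Q_parent = {0: 1.0, 1: 0.5}
Q_child = {(0, 0): 0.0, (0, 1): 0.2, (1, 0): 2.0, (1, 1): 0.1}
sepset_of = {0: 0, 1: 1}    # parent action -> SepSet assignment (assumed)

# Upward pass: child reports its best achievable value per SepSet assignment.
child_msg = {s: max(q for (s2, a), q in Q_child.items() if s2 == s)
             for s in {0, 1}}

# Root decision: parent maximizes its own Q plus the child's reported value.
a_p = max(Q_parent, key=lambda a: Q_parent[a] + child_msg[sepset_of[a]])

# Downward pass: child picks its best action given the announced SepSet value.
s = sepset_of[a_p]
a_c = max((a for (s2, a) in Q_child if s2 == s),
          key=lambda a: Q_child[(s, a)])
print(a_p, a_c)   # -> 1 0 (parent sacrifices local value for global gain)
```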

20 Reusing Models and Computation
Classes of objects: basic subsystems with the same rewards and transitions.
Reuse in knowledge representation: a library of subsystems.
Reusing computation (see the caching sketch below):
  Compute the policy (visitation frequencies) for one subsystem, then use it in all subsystems of the same class
  Compute the messages for one subtree, then use them in all equivalent subtrees
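
A minimal sketch of that reuse, assuming each subsystem exposes a hashable class_key (equal keys meaning identical rewards and transitions) and reusing solve_standalone_mdp from the earlier skeleton; both names are hypothetical.

```python
policy_cache = {}

def policy_for(subsystem, reward_messages):
    # Same class and same incoming reward messages => same stand-alone MDP,
    # so the solved policy can be shared across all such subsystems.
    # reward_messages is assumed hashable here (e.g. a tuple).
    key = (subsystem.class_key, reward_messages)
    if key not in policy_cache:   # solve once per equivalence class
        policy_cache[key] = solve_standalone_mdp(subsystem, reward_messages)
    return policy_cache[key]
```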

21 Related Work
Serial decompositions (one subsystem "active" at a time):
  Kushner & Chen '74 (rooms in a maze)
  Dean & Lin, IJCAI-95 (combined with abstraction)
  Hierarchical RL is similar (MAXQ, HAM, etc.)
Parallel decompositions (more expressive: exponentially larger state space):
  Singh & Cohn, NIPS-98 (enumerates states)
  Meuleau et al., AAAI-98 (heuristic for resources)

22 Related Work
Dantzig-Wolfe and Benders decomposition:
  Dantzig '65
  First used for MDPs by Kushner & Chen '74
  We are the first to apply it to parallel subsystems
Variable elimination:
  Well known from Bayes nets
  Guestrin, Koller & Parr, NIPS-01

23 Summary – Hierarchical Factored MDPs
Parallel decomposition → exponential state space
Efficient distributed planning algorithm:
  Solve the local stand-alone MDPs with any algorithm
  Reward sharing coordinates the subsystem plans
  A simple message-passing algorithm computes the shared rewards
Hierarchical action selection:
  Limited communication
  Limited observability
Reuse for knowledge representation and computation.
A general approach for modeling and planning in large stochastic systems.

