Fast Strong Planning for FOND Problems with Multi-Root DAGs
Jicheng Fu, Andres Calderon Jaramillo - University of Central Oklahoma; Vincent Ng, Farokh B. Bastani, and I-Ling Yen - University of Texas at Dallas


ABSTRACT

We present a planner for a difficult yet under-investigated class of planning problems: Fully Observable Non-Deterministic (FOND) planning problems with strong solutions. Our strong planner employs a new data structure, the MRDAG (multi-root directed acyclic graph), to define how the solution space should be expanded. We further equip an MRDAG with heuristics to keep the search moving in the relevant direction. We performed extensive experiments to evaluate MRDAGs and the heuristics. The results show that our strong-planning algorithm achieves impressive performance on a variety of benchmark problems: on average it runs more than three orders of magnitude faster than the state-of-the-art planners MBP and Gamer, and it demonstrates significantly better scalability.

BACKGROUND

In its broadest terms, artificial intelligence planning deals with designing algorithms that find a plan to achieve a goal under certain constraints. In this context, a domain is a structure that describes the actions available for constructing a plan. A planning problem for a given domain specifies the initial state of a system and a set of goals to achieve. A planner is an algorithm that solves a planning problem by finding a suitable set of actions in the domain that takes the system from the initial state to at least one goal state. FOND problems assume that each state of the system can be fully observed and that some actions in the domain may have more than one possible outcome (non-determinism). Solutions can be classified into three categories [Cimatti et al., 2003]: weak plans, strong cyclic plans, and strong plans. See Figure 1 and Figure 2.

Figure 1(a). A weak plan: there is at least one successful path to the goal.
Figure 1(b). A strong cyclic plan: the plan may use actions that can cause cycles but is still expected to reach the goal eventually.
Figure 1(c). A strong plan: the goal is achieved from every reachable state without using actions that cause cycles.
Figure 2. Example of a simple strong plan. The action pick-up(x, y) is non-deterministic: it can succeed or fail (block x may fall onto the table). The action put-down(x) is deterministic.

OUR PLANNER

Our planner finds a strong plan if one exists. At each stage, states with a single applicable action are expanded until states with more than one applicable action are encountered. A set of actions is then selected and applied to those states. The procedure continues until the only non-expanded states are goal states, in which case a strong plan is returned. If dead ends are encountered, the algorithm backtracks to a previous stage. If the algorithm has to backtrack from the initial state, no strong plan exists. At each expansion, the planner checks that no cycle is produced. Each stage produces a multi-root directed acyclic graph (MRDAG), whose roots are the states with more than one applicable action. See Figure 3.

We use two heuristics to inform our planner (a sketch of the resulting search loop follows Figure 3):
Most Constrained State (MCS): expand states with fewer applicable actions first.
Least Heuristic Distance (LHD): try applicable actions with the least estimated distance to the goal first.

Figure 3. Expansion of the solution space. The graph illustrates how MRDAGs are structured and expanded: dark green nodes are the roots of an MRDAG, and light green nodes are states with exactly one applicable action.
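To make the expansion and backtracking scheme concrete, the following is a minimal sketch (in Python) of a strong-plan search driven by the two heuristics above. It is an illustration under simplifying assumptions, not the authors' implementation: the Problem encoding, the est_goal_dist estimate, and the way LHD aggregates over non-deterministic outcomes are all hypothetical, and the MRDAG bookkeeping of the real planner is omitted.

# Sketch of a strong-plan search with MCS/LHD ordering, cycle checks, and
# backtracking on dead ends.  All names here are illustrative, not the
# authors' code.

class Problem:
    def __init__(self, init, goals, actions):
        self.init = init                  # initial state (hashable)
        self.goals = set(goals)           # goal states
        self.actions = actions            # {name: {state: [outcome states]}}

    def applicable(self, state):
        """Names of actions applicable in `state`."""
        return [a for a, eff in self.actions.items() if state in eff]

    def outcomes(self, state, action):
        return self.actions[action][state]


def frontier(problem, policy):
    """Non-goal states reachable from the initial state under the partial
    policy that have not yet been assigned an action."""
    seen, stack, front = set(), [problem.init], []
    while stack:
        s = stack.pop()
        if s in seen or s in problem.goals:
            continue
        seen.add(s)
        if s in policy:
            stack.extend(problem.outcomes(s, policy[s]))
        else:
            front.append(s)
    return front


def creates_cycle(problem, policy, state, action):
    """True if assigning `state -> action` would let some outcome lead back
    to `state` by following already-assigned actions."""
    stack, seen = list(problem.outcomes(state, action)), set()
    while stack:
        s = stack.pop()
        if s == state:
            return True
        if s in seen or s not in policy:
            continue
        seen.add(s)
        stack.extend(problem.outcomes(s, policy[s]))
    return False


def strong_plan(problem, est_goal_dist):
    """Backtracking search for a strong plan (a state -> action mapping).
    est_goal_dist is a user-supplied distance estimate used by LHD."""
    def search(policy):
        front = frontier(problem, policy)
        if not front:
            return dict(policy)          # acyclic and every leaf is a goal
        # MCS: expand the state with the fewest applicable actions first.
        state = min(front, key=lambda s: len(problem.applicable(s)))
        # LHD: try actions whose worst outcome looks closest to the goal
        # (aggregating by max over outcomes is our assumption).
        cands = sorted(problem.applicable(state),
                       key=lambda a: max(est_goal_dist(t)
                                         for t in problem.outcomes(state, a)))
        for act in cands:
            if creates_cycle(problem, policy, state, act):
                continue                  # would break the strong (acyclic) property
            policy[state] = act
            result = search(policy)
            if result is not None:
                return result
            del policy[state]             # dead end below: backtrack
        return None                       # no strong plan from this stage

    return search({})

With MCS choosing the frontier state with the fewest applicable actions, forced states (a single applicable action) are committed first and genuine choice points, the MRDAG roots of Figure 3, are deferred. On the blocks example of Figure 2, est_goal_dist could simply count blocks not yet in their goal position.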
EVALUATION

Among the planners capable of solving strong FOND problems, the two best known are arguably MBP [Cimatti et al., 2003] and Gamer [Kissmann and Edelkamp, 2009]. We used domains derived from the FOND track of the 2008 International Planning Competition [Bryce and Buffet, 2008]. Gamer outperformed MBP in all domains. Nevertheless, our planner ran two to four orders of magnitude faster than Gamer, with comparable plan sizes in most cases.

REFERENCES

[Bryce and Buffet, 2008] Daniel Bryce and Olivier Buffet. International Planning Competition Uncertainty Part: Benchmarks and Results. In Proceedings of the International Planning Competition, 2008.
[Cimatti et al., 2003] Alessandro Cimatti, Marco Pistore, Marco Roveri, and Paolo Traverso. Weak, strong, and strong cyclic planning via symbolic model checking. Artificial Intelligence, 147(1-2):35–84, 2003.
[Kissmann and Edelkamp, 2009] Peter Kissmann and Stefan Edelkamp. Solving fully-observable non-deterministic planning problems via translation into a general game. In Proceedings of the 32nd Annual German Conference on Advances in Artificial Intelligence (KI'09), pages 1–8, Berlin, Heidelberg: Springer-Verlag, 2009.

