Fast Strong Planning for FOND Problems with Multi-Root DAGs
Andres Calderon Jaramillo, Dr. Jicheng Fu
Department of Computer Science, University of Central Oklahoma

ABSTRACT

We present a planner for addressing a difficult, yet under-investigated class of planning problems: Fully Observable Non-Deterministic (FOND) planning problems with strong solutions. Our strong planner implements two novel ideas. First, we employ a new data structure, the MRDAG (multi-root directed acyclic graph), to define how the solution space should be expanded. Second, we equip each MRDAG with two heuristics to keep planning moving in the relevant search direction. Results show that our strong algorithm achieves impressive performance on a variety of benchmark problems: it runs more than three orders of magnitude faster than MBP and Gamer and demonstrates significantly better scalability.

BACKGROUND

In its broadest terms, artificial intelligence planning deals with designing algorithms that find a plan to achieve a goal under certain constraints. In this context, a domain is a structure describing the possible actions that can be used in finding a plan. A planning problem for a given domain specifies the initial state of a system and a set of goals to achieve. A planner is an algorithm that solves a planning problem by finding a suitable set of actions in the domain that take the system from the initial state to at least one goal state. FOND problems assume that each state of the system can be fully observed and that actions in the domain may have more than one possible outcome (non-determinism). Solutions can be classified in three categories [Cimatti et al., 2003]: weak plans, strong cyclic plans, and strong plans. See Figure 1 and Figure 2.

OUR PLANNER

Our planner finds a strong plan if one exists. At each stage, states with a single applicable action are expanded until states with more than one applicable action are encountered.
A set of actions is then selected to be applied to the latter. The procedure continues until the only non-expanded states are goal states, in which case a strong plan is returned. If dead ends are encountered, the algorithm backtracks to a previous stage; if the algorithm has to backtrack from the initial state, no strong plan can exist. At each expansion, the planner checks that no cycle is produced. Each stage produces a multi-root directed acyclic graph (MRDAG), where the roots of the graph are the states with more than one applicable action. See Figure 3.

We use two heuristics to inform our planner:
- Most-Constrained State (MCS): expand states with fewer applicable actions first.
- Least-Heuristic Distance (LHD): try applicable actions with the smallest estimated distance to the goal first.

Figure 1(a). A weak plan: there is at least one successful path from the initial state to the goal.
Figure 1(b). A strong cyclic plan: the plan may use actions that can cause cycles but will likely succeed eventually.
Figure 1(c). A strong plan: the goal is achieved from any reachable state without using actions that cause cycles.
Figure 2. Example of a simple strong plan. The action pick-up(x, y) is non-deterministic, as it can succeed or fail (block x may fall on the table); the action put-down(x) is deterministic.
Figure 3. Expansion of the solution space: how MRDAGs are structured and expanded. Dark green nodes are roots of an MRDAG; light green nodes are states with exactly one applicable action.

Among the planners capable of solving strong FOND problems, the two best known are arguably MBP [Cimatti et al., 2003] and Gamer [Kissmann and Edelkamp, 2009].
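To make the expansion-and-backtracking idea above concrete, here is a minimal sketch of strong-plan search as a depth-first AND-OR search, under stated assumptions: it is not the authors' implementation, it recomputes shared subproblems rather than reusing sub-plans the way the MRDAG structure allows, and all function names and the toy domain are illustrative. The LHD heuristic is modeled by trying actions in order of their worst-case estimated goal distance; MCS is implicit here, since a state with a single applicable action offers only one choice.

```python
# Illustrative sketch (not the authors' implementation) of strong-plan search
# as depth-first AND-OR search with chronological backtracking. A strong plan
# maps each covered non-goal state to one action whose EVERY possible outcome
# still leads to the goal without cycles.

def strong_plan(s, goals, applicable, outcomes, h, path=frozenset()):
    """Return a dict {state: action} that is a strong plan from s, or None.

    applicable(s)  -> actions applicable in state s
    outcomes(s, a) -> set of possible successor states (non-determinism)
    h(s)           -> estimated distance from s to a goal (LHD ordering)
    """
    if s in goals:
        return {}
    if s in path:                      # a cycle would make the plan non-strong
        return None
    # LHD: try actions whose worst-case successor looks closest to the goal.
    for a in sorted(applicable(s),
                    key=lambda a: max(h(t) for t in outcomes(s, a))):
        plan = {s: a}
        for t in outcomes(s, a):       # AND-node: every outcome must succeed
            sub = strong_plan(t, goals, applicable, outcomes, h, path | {s})
            if sub is None:
                plan = None            # dead end below: backtrack and try
                break                  # the next action for s
            plan.update(sub)
        if plan is not None:
            return plan
    return None                        # no action works: s is a dead end

# Toy domain in the spirit of Figure 2's non-determinism: "risky" may reach
# the goal G or a dead end D; "safe" always leads to states that reach G.
acts = {"I": ["risky", "safe"], "M1": ["m1"], "M2": ["m2"], "D": [], "G": []}
outs = {("I", "risky"): {"D", "G"}, ("I", "safe"): {"M1", "M2"},
        ("M1", "m1"): {"G"}, ("M2", "m2"): {"G"}}
dist = {"G": 0, "M1": 1, "M2": 1, "D": 5, "I": 2}
plan = strong_plan("I", {"G"}, lambda s: acts[s],
                   lambda s, a: outs[(s, a)], dist.get)
```

In this toy problem the LHD ordering makes the planner commit to "safe" (worst-case distance 1) before ever trying "risky" (worst-case distance 5), so the dead end D is never entered; a weak planner, by contrast, could accept "risky" because one of its outcomes reaches the goal.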
EVALUATION

We used domains derived from the FOND track of the 2008 International Planning Competition [Bryce and Buffet, 2008]. Gamer outperformed MBP in all domains. Nevertheless, our planner ran two to four orders of magnitude faster than Gamer.

REFERENCES

[Bryce and Buffet, 2008] Daniel Bryce and Olivier Buffet. International Planning Competition Uncertainty Part: Benchmarks and Results. In Proceedings of the International Planning Competition, 2008.
[Cimatti et al., 2003] Alessandro Cimatti, Marco Pistore, Marco Roveri, and Paolo Traverso. Weak, strong, and strong cyclic planning via symbolic model checking. Artificial Intelligence, 147(1-2):35–84, 2003.
[Kissmann and Edelkamp, 2009] Peter Kissmann and Stefan Edelkamp. Solving Fully-Observable Non-Deterministic Planning Problems via Translation into a General Game. In Proceedings of the 32nd Annual German Conference on Advances in Artificial Intelligence (KI'09), pages 1–8. Berlin, Heidelberg: Springer-Verlag, 2009.