Midterm Review Prateek Tandon, John Dickerson

Basic Uninformed Search (Summary)
– b = branching factor
– d = depth of the shallowest goal state
– m = maximum depth of the search space
– l = depth limit of the algorithm

CSP Solving – Backtracking Search
Depth-first search for CSPs with single-variable assignments is called backtracking search. Improvements:
– Most constrained variable / Minimum Remaining Values (MRV) – choose the variable with the fewest legal values
– Least constraining value – choose the value that rules out the fewest choices for the remaining variables
– Forward checking – keep track of remaining legal values and backtrack as soon as any variable has no remaining legal values
– Arc consistency (AC-3) – propagate information across arcs
– Conflict-directed backjumping – maintain a conflict set and backjump to a variable that might help resolve the conflict
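The pieces above can be combined into a small sketch: backtracking search with the MRV heuristic and forward checking, applied to map coloring. This is a minimal illustration, not the course's reference implementation; the variable/domain/neighbor encoding is made up for the example.

```python
def backtracking_search(variables, domains, neighbors):
    """domains: dict var -> list of legal values (shrinks via forward checking)."""
    def backtrack(assignment, domains):
        if len(assignment) == len(variables):
            return assignment
        # MRV: pick the unassigned variable with the fewest legal values.
        var = min((v for v in variables if v not in assignment),
                  key=lambda v: len(domains[v]))
        for value in list(domains[var]):
            # Forward checking: remove `value` from unassigned neighbors' domains.
            pruned, ok = {}, True
            for n in neighbors[var]:
                if n not in assignment and value in domains[n]:
                    pruned.setdefault(n, []).append(value)
                    domains[n].remove(value)
                    if not domains[n]:      # domain wipeout -> this value fails
                        ok = False
            if ok:
                result = backtrack({**assignment, var: value}, domains)
                if result is not None:
                    return result
            # Undo the pruning before trying the next value.
            for n, vals in pruned.items():
                domains[n].extend(vals)
        return None
    return backtrack({}, {v: list(domains[v]) for v in variables})

# Three-color a triangle (A, B, C) plus a pendant node D.
variables = ["A", "B", "C", "D"]
neighbors = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"], "D": ["C"]}
domains = {v: ["red", "green", "blue"] for v in variables}
solution = backtracking_search(variables, domains, neighbors)
```

Note that forward checking alone keeps each assignment consistent with previously assigned neighbors, since an assigned neighbor's value has already been pruned from the current variable's domain.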

A* Search

function A*-SEARCH(problem) returns a solution or failure
  return BEST-FIRST-SEARCH(problem, g + h)

f(n) = estimated cost of the cheapest solution through n = g(n) + h(n)

A* Search…
In a minimization problem, an admissible heuristic h(n) never overestimates the true remaining cost. (In a maximization problem, h(n) is admissible if it never underestimates.) Best-first search using f(n) = g(n) + h(n) with an admissible h(n) is known as A* search. A* tree search is complete and optimal.
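As a concrete sketch, here is a minimal A* over an explicit weighted graph; the `graph` and `h` tables are invented for this example (h is admissible for it), and this is illustrative rather than a definitive implementation.

```python
import heapq

def a_star(graph, h, start, goal):
    """graph: dict node -> list of (neighbor, step_cost); h: admissible heuristic table."""
    # Frontier entries are (f, g, node, path); heapq pops the lowest f first.
    frontier = [(h[start], 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if g > best_g.get(node, float("inf")):
            continue                          # stale queue entry, skip it
        for nbr, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, float("inf")

graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 5)], "B": [("G", 1)]}
h = {"S": 4, "A": 3, "B": 1, "G": 0}           # never overestimates: admissible
path, cost = a_star(graph, h, "S", "G")
```

On this toy graph the cheapest route is S → A → B → G with cost 4, which A* finds while never expanding a node whose f-cost exceeds the optimum.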

Iterative Deepening A* (IDA*)

function IDA*(problem) returns a solution sequence
  inputs: problem, a problem
  static: f-limit, the current f-COST limit
          root, a node
  root ← MAKE-NODE(INITIAL-STATE[problem])
  f-limit ← f-COST(root)
  loop do
    solution, f-limit ← DFS-CONTOUR(root, f-limit)
    if solution is non-null then return solution
    if f-limit = ∞ then return failure
  end

function DFS-CONTOUR(node, f-limit) returns a solution sequence and a new f-COST limit
  inputs: node, a node
          f-limit, the current f-COST limit
  static: next-f, the f-COST limit for the next contour, initially ∞
  if f-COST[node] > f-limit then return null, f-COST[node]
  if GOAL-TEST[problem](STATE[node]) then return node, f-limit
  for each node s in SUCCESSORS(node) do
    solution, new-f ← DFS-CONTOUR(s, f-limit)
    if solution is non-null then return solution, f-limit
    next-f ← MIN(next-f, new-f)
  end
  return null, next-f

f-COST[node] = g[node] + h[node]
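The pseudocode transcribes fairly directly into Python. The sketch below reuses the toy `graph`/`h` convention from the A* example (those names are assumptions, not part of the slides) and assumes an acyclic graph, since like the pseudocode it does no cycle checking.

```python
def ida_star(graph, h, start, goal):
    def dfs_contour(node, g, path, f_limit):
        """Returns (solution_path_or_None, new_f_limit)."""
        f = g + h[node]
        if f > f_limit:
            return None, f                    # cutoff: report the exceeding f-cost
        if node == goal:
            return path, f_limit
        next_f = float("inf")
        for succ, cost in graph.get(node, []):
            solution, new_f = dfs_contour(succ, g + cost, path + [succ], f_limit)
            if solution is not None:
                return solution, f_limit
            next_f = min(next_f, new_f)       # smallest f-cost beyond the contour
        return None, next_f

    f_limit = h[start]                        # f-cost of the root
    while True:
        solution, f_limit = dfs_contour(start, 0, [start], f_limit)
        if solution is not None:
            return solution
        if f_limit == float("inf"):
            return None                       # no solution exists

graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 5)], "B": [("G", 1)]}
h = {"S": 4, "A": 3, "B": 1, "G": 0}
path = ida_star(graph, h, "S", "G")
```

Each iteration is a depth-first search bounded by the current f-limit, and the next limit is the smallest f-cost that exceeded the old one, so memory stays linear in the depth of the search.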

A* vs. IDA*
Map of Romania showing contours at f = 380, f = 400, and f = 420, with Arad as the start state. Nodes inside a given contour have f-costs lower than the contour value.

LP, IP, MIP, WDP, etc.
Topics you should know at a high level:
– LP: visual representation of simplex
– (M)IP: branch and cut (what are cuts? why do we use them?)
– Cuts should separate the LP optimum from the integer points
– Gomory cuts
Topics you should know well:
– Formulating a combinatorial search problem as an IP/MIP (think HW2, P2)
– (M)IP: branch and bound (upper bounds, lower bounds, proving optimality)
– Principle of least commitment (stay flexible)
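To make the branch-and-bound bookkeeping concrete, here is a toy solver for a 0/1 knapsack IP that uses the fractional (LP-relaxation) optimum as the upper bound at each node and the best integer solution found so far as the lower bound. It is a sketch of the bounding idea, not a production MIP solver; the problem instance at the end is invented.

```python
def knapsack_bnb(values, weights, capacity):
    # Consider items in decreasing value density (assumes positive weights).
    items = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)

    def lp_bound(idx, value, room):
        # Greedy fractional fill = optimum of the LP relaxation (upper bound).
        for i in items[idx:]:
            if weights[i] <= room:
                room -= weights[i]
                value += values[i]
            else:
                return value + values[i] * room / weights[i]
        return value

    best = 0
    def branch(idx, value, room):
        nonlocal best
        best = max(best, value)               # every feasible node is a lower bound
        if idx == len(items) or lp_bound(idx, value, room) <= best:
            return                            # prune: bound cannot beat the incumbent
        i = items[idx]
        if weights[i] <= room:                # branch: take item i ...
            branch(idx + 1, value + values[i], room - weights[i])
        branch(idx + 1, value, room)          # ... or leave it out

    branch(0, 0, capacity)
    return best

opt = knapsack_bnb([60, 100, 120], [10, 20, 30], 50)
```

A node is pruned exactly when its LP upper bound cannot beat the incumbent, and optimality is proven once every open node has been pruned or fully explored.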

Planning Review
– STRIPS – basic representation
– Linear planning – work on one goal at a time; solve a goal completely before moving on to the next one.
  – Reduces the search space since goals are solved one at a time
  – But this leads to incompleteness [Sussman anomaly]
  – The planner's efficiency is sensitive to goal orderings
  – Concrete implementation as an algorithm: GPS [look over the example in the slides]
– Partial-order planning – constrain the ordering in the problem only as much as you need to at the current moment.
  – Sound and complete, whereas linear planning is only sound
– Graphplan – "preprocess" the search using a planning graph
– SATPlan – generate a Boolean SAT formula for the plan
  – What was the limitation?

Planning Graph
Graphplan adds one level at a time until either a solution is found by EXTRACT-SOLUTION [as either a CSP or a backward search] or it can prove that no solution exists.

Mutex Rules for Actions
Two actions at a given level are mutex if:
– Inconsistent effects: one action negates an effect of the other
– Interference: one of the effects of an action is the negation of a precondition of the other
– Competing needs: one of the preconditions of one action is mutually exclusive with a precondition of the other
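The three tests above can be sketched directly in code. The encoding here is a hypothetical one chosen just for illustration: a literal is a string and its negation carries a "~" prefix; an action is a (preconditions, effects) pair of literal sets.

```python
def neg(lit):
    """Negate a literal in the "~"-prefix encoding."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def actions_mutex(a1, a2, mutex_literals=frozenset()):
    """a1, a2: (preconditions, effects) pairs of sets of literal strings.
    mutex_literals: pairs of literals already marked mutex at the prior level."""
    pre1, eff1 = a1
    pre2, eff2 = a2
    # Inconsistent effects: one action negates an effect of the other.
    if any(neg(e) in eff2 for e in eff1):
        return True
    # Interference: an effect of one negates a precondition of the other.
    if any(neg(e) in pre2 for e in eff1) or any(neg(e) in pre1 for e in eff2):
        return True
    # Competing needs: some pair of preconditions is mutex at the prior level.
    if any((p, q) in mutex_literals or (q, p) in mutex_literals
           for p in pre1 for q in pre2):
        return True
    return False

# Example: eating the cake clobbers the no-op that preserves having it.
eat = ({"have_cake"}, {"eaten_cake", "~have_cake"})
noop = ({"have_cake"}, {"have_cake"})
mutex = actions_mutex(eat, noop)
```

Here the pair is mutex: the effect ~have_cake both negates the no-op's effect (inconsistent effects) and its precondition (interference).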

Mutex Rules for Literals
Two literals at a given level are mutex if:
– One is the negation of the other [easy]
– Inconsistent support – every possible pair of actions from the prior action level that could achieve the two literals is mutually exclusive.
  – Check whether the pairs of actions that produce the literals are mutex at the previous action level.
  – Look at the book example