A Guaranteed Bidirectional Search

A Guaranteed Bidirectional Search
Presented by Allen Bates and Daniel Ruegamer

Before we begin, here are the authors. Bidirectional Search That Is Guaranteed to Meet in the Middle is written by:
Robert C. Holte, Computing Science Department, University of Alberta
Ariel Felner, ISE Department, Ben-Gurion University
Guni Sharon, ISE Department, Ben-Gurion University
Nathan R. Sturtevant, Computer Science Department, University of Denver

Meeting in the Middle
MM is the first bidirectional heuristic search algorithm whose forward and backward searches are guaranteed never to expand a node beyond the solution midpoint, under all circumstances.
MM never expands a node whose f-value exceeds C*.
MM returns C* if there exists a path from start to goal.
If MM's heuristics are consistent, MM never expands a state twice.

Corollary 14
The forward search never expands a node n with fF(n) > C* or gF(n) > C* / 2; the backward search never expands a node n with fB(n) > C* or gB(n) > C* / 2.
If no path exists from start to goal, C* is infinite. If a path does exist, the forward search never expands a node once C > C*.
The priority pr(n) determines which node is chosen for expansion in a search. If a node is expanded in the forward search, then prF(n) <= C*. Since prF(n) = max(fF(n), 2*gF(n)), both fF(n) <= C* and 2*gF(n) <= C*, i.e. gF(n) <= C* / 2.
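As a small illustration of the priority used here, a minimal sketch of pr(n) = max(f(n), 2*g(n)) follows; the function name and arguments are mine, not from the paper.

```python
def priority(g_n: float, h_n: float) -> float:
    """MM's node priority: pr(n) = max(f(n), 2*g(n)), with f(n) = g(n) + h(n).

    If a node is expanded, its priority was at most C*, so both bounds in
    Corollary 14 follow at once: f(n) <= C* and g(n) <= C* / 2.
    """
    f_n = g_n + h_n
    return max(f_n, 2.0 * g_n)
```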

Lemma 4
For any finite state space S with non-negative edge weights, MM halts for any start and goal states in S.
Since MM never expands a node on the same path twice, and since there are no negative-cost cycles, MM never expands a node on a path containing a cycle. Therefore each state can only be expanded until the paths through it are permanently closed.
Since each iteration expands a node in one of the two searches, after a finite number of iterations MM will have permanently closed every reachable node.
If there is no path from start to goal, MM returns infinity: the algorithm checks whether a newly generated node is in the Open set of the backward search; if no path exists, no node is ever found there, so the cheapest solution is never updated and keeps its initial cost of infinity.

Theorem 16
If there exists a path from start to goal, MM returns U = C*.
MM cannot terminate while C < C*, since U can never be smaller than C*. Therefore MM eventually reaches an iteration in which C >= C*.
If C > C*, MM returns U = C*; if C = C*, then U <= C = C*, so U = C* is returned.
If a path does exist from start to goal, the Open and Closed sets must contain nodes representing that path, so they cannot both become empty before the solution is found; MM will find this path.

Theorem 23
Suppose MM's heuristics are consistent and MM does not re-open closed nodes. Then, if there exists a path from start to goal, MM never expands a state twice.
Within a single search direction, MM cannot expand a state twice, because once a state is closed it stays permanently closed.
The only remaining way a state could be expanded twice is once in the forward direction and once in the backward direction. However, if a state is on the Open lists of both the forward and the backward search, U is set to the cost of the path through that state, which is C*. The algorithm then checks whether U <= C and terminates before expanding the state a second time.

Definitions
Each state is labeled with two letters: the first describes its distance from start, the second its distance from goal. N is Near, F is Far, R is Remote.
With respect to the goal: state s is 'near to goal' if d(s, goal) <= C* / 2, 'remote' if d(s, goal) > C*, and 'far from goal' otherwise. The analogous definitions using d(start, s) give the first letter.
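A minimal sketch of this classification (the function and argument names are illustrative, not from the paper): applied to d(start, s) it yields the first letter of a region label, applied to d(s, goal) the second.

```python
def classify(dist: float, c_star: float) -> str:
    """Label a distance as Near ('N'), Far ('F'), or Remote ('R')."""
    if dist <= c_star / 2:
        return "N"   # within half the optimal solution cost
    if dist <= c_star:
        return "F"   # between C*/2 and C*
    return "R"       # beyond C*
```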

Definitions
A* < MM means that A* expands fewer nodes than MM.
N' indicates that MM might not expand all of the nodes that are near to start, so N'F <= NF.
The heuristic h splits each region into two parts: pruned states and unpruned states. N'FU is the unpruned part of N'F.
FN'B is read the same way as FN'U, except that the subscript B means the pruning is based on hB, the heuristic of MM's backward search.

Comparisons: MM vs A*
MM's node count: MM = N'FU + N'N'U + FN'B + RN'B
A*'s node count: A* = NFU + NNU + FNU + FFU
By definition, N'FU <= NFU and N'N'U <= NNU, so MM has the advantage in the NF and NN regions.
For the FN region, A* prunes with a forward heuristic that only needs to estimate a small distance of at most C* / 2, whereas MM prunes with a backward heuristic that must estimate a distance larger than C* / 2. A* has the advantage in FN.
FFU vs RN'B: if RN is smaller than FF but hF is accurate enough to make FFU smaller than MM0's RN', then hB is accurate enough to prune in RN as well. Therefore FFU > RN'B whenever RN is much smaller than FF, and MM has the advantage in this case.

Comparisons: MM vs MM0
MM's node count: MM = N'FU + N'N'U + FN'B + RN'B
MM0's node count: MM0 = N'F + N'N' + FN' + RN'
MM0 can expand strictly fewer nodes than MM even when MM uses a consistent, non-zero heuristic.

Definitions
A problem instance is a pair (start, goal) of states in a state space in which all edge weights are non-negative.
A state is an immutable element of the state space, with a fixed distance to start and to goal.
d(u, v) is the distance (cost of a least-cost path) from u to v.
f(n) = g(n) + h(n), where n is the last node on the path being evaluated, g(n) is the cost of the path from the start node to n, and h(n) is the heuristic value of node n.
C* is the cost of the optimal solution path.
C is the minimum priority over all the nodes in both Open lists (OpenF and OpenB).
U is the cost of the cheapest solution found so far.

Definitions
A node is a mutable element created and updated by a search algorithm, representing a path (or paths) in the state space.
The forward search is subscripted with F; the backward search is subscripted with B.
Each search direction uses an admissible heuristic, i.e. one that never overestimates the remaining cost from a node to its target: h(n) <= the actual cost.

Pseudocode
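The original slide showed the paper's pseudocode as an image. Below is a minimal Python sketch of the MM loop as described on these slides; the function and parameter names (neighbors, cost, hF, hB) are illustrative, the graph is assumed undirected so one neighbor function serves both directions, and the stopping test is the simple U <= C check mentioned earlier (the paper's actual termination condition is stronger and also involves minimum f- and g-values in the Open lists).

```python
import heapq
import itertools
import math

def mm_search(start, goal, neighbors, cost, hF, hB):
    """Sketch of MM bidirectional search (assumes an undirected graph)."""
    g = {"F": {start: 0.0}, "B": {goal: 0.0}}    # best known g-values per direction
    h = {"F": hF, "B": hB}                        # forward / backward heuristics
    closed = {"F": set(), "B": set()}
    tie = itertools.count()                       # tie-breaker so the heap never compares states

    def pr(d, s):
        # Node priority pr(n) = max(f(n), 2*g(n)); see Corollary 14.
        return max(g[d][s] + h[d](s), 2.0 * g[d][s])

    open_heap = {"F": [(pr("F", start), next(tie), start)],
                 "B": [(pr("B", goal), next(tie), goal)]}
    U = math.inf                                  # cheapest solution found so far

    def min_priority(d):
        heap = open_heap[d]
        while heap and heap[0][2] in closed[d]:
            heapq.heappop(heap)                   # drop stale entries
        return heap[0][0] if heap else math.inf

    while True:
        prF, prB = min_priority("F"), min_priority("B")
        C = min(prF, prB)                         # minimum priority over both Open lists
        if U <= C:                                # terminate; also returns inf when no path exists
            return U
        d = "F" if prF <= prB else "B"            # expand on the side with the smaller priority
        other = "B" if d == "F" else "F"
        _, _, n = heapq.heappop(open_heap[d])
        closed[d].add(n)
        for c in neighbors(n):
            new_g = g[d][n] + cost(n, c)
            if c in g[d] and g[d][c] <= new_g:
                continue                          # already reached at least as cheaply
            g[d][c] = new_g
            closed[d].discard(c)                  # re-open if a cheaper path was found
            heapq.heappush(open_heap[d], (pr(d, c), next(tie), c))
            if c in g[other]:                     # the two frontiers meet: update U
                U = min(U, new_g + g[other][c])
```

For instance, on a three-node path graph with unit edge costs and zero heuristics (which reduces MM to the brute-force MM0 discussed above), the sketch returns the optimal cost 2.0:

```python
graph = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
print(mm_search("A", "C", neighbors=lambda s: graph[s], cost=lambda u, v: 1.0,
                hF=lambda s: 0.0, hB=lambda s: 0.0))   # -> 2.0
```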

Example

Example

Questions and Answers

References
Robert C. Holte, Ariel Felner, Guni Sharon, and Nathan R. Sturtevant. "Bidirectional Search That Is Guaranteed to Meet in the Middle." Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI-16), 2016.