CS621: Artificial Intelligence, Pushpak Bhattacharyya, CSE Dept., IIT Bombay. Lectures 18, 19, 20 – A* Monotonicity. 2nd, 6th and 7th September, 2010.


A* Algorithm – Definition and Properties
f(n) = g(n) + h(n)
The node with the least value of f is chosen from the open list (OL).
f*(n) = g*(n) + h*(n), where
g*(n) = actual cost of the optimal path (s, n)
h*(n) = actual cost of the optimal path (n, g)
g(n) ≥ g*(n)
By definition, h(n) ≤ h*(n)
[Figure: state space graph G with start s, node n and goal, showing g(n) along (s, n) and h(n) along (n, goal)]
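To make the definitions concrete, here is a minimal A* sketch. The dict-based graph, the heuristic interface, and all names are illustrative assumptions, not from the slides; the key step is popping the node with the least f = g + h from the open list.

```python
import heapq

def a_star(graph, h, start, goal):
    """Minimal A* sketch (illustrative). graph: node -> [(neighbor, cost)],
    h: heuristic function. Entries on the open list are (f, g, node, path)."""
    open_list = [(h(start), 0, start, [start])]
    closed = set()
    while open_list:
        # The node with the least value of f is chosen from the OL.
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return g, path
        if node in closed:
            continue
        closed.add(node)
        for nbr, cost in graph[node]:
            if nbr not in closed:
                g2 = g + cost
                heapq.heappush(open_list, (g2 + h(nbr), g2, nbr, path + [nbr]))
    return None
```

Note that this sketch never revisits closed nodes (no parent-pointer redirection); as the later theorem on the monotone restriction shows, that shortcut is safe exactly when h is monotone.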

Summary on Admissibility
1. A* algorithm halts
2. A* algorithm finds optimal path
3. If f(n) < f*(S) then node n has to be expanded before termination
4. If A* does not expand a node n before termination then f(n) >= f*(S)

Monotonicity

Definition
A heuristic h(p) is said to satisfy the monotone restriction if, for all p, h(p) <= h(p_c) + cost(p, p_c), where p_c is a child of p.
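On a finite graph the definition can be checked mechanically. The helper below is an illustrative sketch (the dict-based graph and heuristic table are assumed representations): it tests h(p) <= h(p_c) + cost(p, p_c) on every parent-child edge.

```python
def satisfies_mr(graph, h):
    """Check the monotone restriction on every edge (sketch).
    graph: node -> [(child, cost)], h: dict of heuristic values."""
    return all(h[p] <= h[c] + cost
               for p, edges in graph.items()
               for c, cost in edges)
```

For example, the admissible-but-not-monotone heuristic from the counterexample slide later in this lecture (h = 2.5, 1.2, 0.5, 0 along a unit-cost chain) fails this check, since 2.5 > 1.2 + 1.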

Theorem
If the monotone restriction (also called the triangular inequality) is satisfied, then for nodes in the closed list, redirection of parent pointers is not necessary. In other words, if any node n is chosen for expansion from the open list, then g(n) = g*(n), where g(n) is the cost of the path from the start node s to n at the point of the search when n is chosen, and g*(n) is the cost of the optimal path from s to n.

Grounding the Monotone Restriction
[Figure: 8-puzzle states n and n', with goal g_l]
h(n): number of displaced tiles. Is h(n) monotone?
h(n) = 8, h(n') = 8, c(n, n') = 1
h(n) <= h(n') + c(n, n'); hence monotone.

Monotonicity of # of Displaced Tiles Heuristic
h(n) <= h(n') + c(n, n')
Any move reduces h(n) by at most 1, and c = 1.
Hence, h(parent) <= h(child) + 1.
If the empty cell is also included in the cost, then h need not be monotone (try!).
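A sketch of the displaced-tiles heuristic for the 8-puzzle, with states assumed to be 9-tuples in row-major order and 0 for the empty cell (an illustrative encoding, not fixed by the slides). The empty cell is not counted, matching the case in which h stays monotone.

```python
def displaced_tiles(state, goal):
    """Number of displaced tiles heuristic for the 8-puzzle (sketch).
    States are 9-tuples in row-major order; 0 = empty cell, not counted."""
    return sum(1 for s, g in zip(state, goal)
               if s != 0 and s != g)
```

Since a single move slides one tile into the empty cell, it changes this count by at most 1, which is exactly the h(parent) <= h(child) + 1 bound above.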

Monotonicity of Manhattan Distance Heuristic (1/2)
Manhattan distance = X-dist + Y-dist from the target position.
Refer to the diagram in the first slide:
h_mn(n) = 10
h_mn(n') = 11
Cost = 1
Again, h(n) <= h(n') + c(n, n')

Monotonicity of Manhattan Distance Heuristic (2/2)
Any move can either increase the h value or decrease it by at most 1, and the cost again is 1. Hence, this heuristic also satisfies the monotone restriction.
If the empty cell is also included in the cost, then Manhattan distance does not satisfy the monotone restriction (try!).
Apply this heuristic to the Missionaries and Cannibals problem.
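The Manhattan-distance heuristic can be sketched under the same assumed 9-tuple encoding (0 = empty cell, excluded from the sum):

```python
def manhattan(state, goal):
    """Manhattan distance heuristic for the 8-puzzle (sketch): sum over
    tiles of X-dist + Y-dist from the target position; empty cell excluded."""
    pos = {tile: (i // 3, i % 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        r, c = i // 3, i % 3          # current row, column
        gr, gc = pos[tile]            # target row, column
        total += abs(r - gr) + abs(c - gc)
    return total
```

Each move slides one tile by one cell, so h changes by exactly ±1 per unit-cost move, which gives the monotone inequality h(n) <= h(n') + c(n, n').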

Relationship between Monotonicity and Admissibility
Observation: Monotone Restriction → Admissibility, but not vice-versa.
Statement: If h(n_i) <= h(n_j) + c(n_i, n_j) for every node n_i and each of its children n_j, then h(n_i) <= h*(n_i) for all i.

Proof of Monotonicity → Admissibility
Let the following be the optimal path starting with a node n:
n = n_1 – n_2 – n_3 – … – n_i – … – n_m = g_l
Observe that h*(n) = c(n_1, n_2) + c(n_2, n_3) + … + c(n_{m-1}, g_l), since the path given above is the optimal path from n to g_l.
Now,
h(n_1) <= h(n_2) + c(n_1, n_2)   -- Eq 1
h(n_2) <= h(n_3) + c(n_2, n_3)   -- Eq 2
…
h(n_{m-1}) <= h(g_l) + c(n_{m-1}, g_l)   -- Eq (m-1)
Adding Eq 1 through Eq (m-1), the intermediate h terms cancel, giving
h(n) <= h(g_l) + h*(n) = h*(n), since h(g_l) = 0.
Hence proved that MR → (h <= h*).

Proof (continued…)
Counterexample for the converse:
[Figure: chain n_1 → n_2 → n_3 → … → g_l, each edge of cost 1]
h*(n_1) = 3, h(n_1) = 2.5
h*(n_2) = 2, h(n_2) = 1.2
h*(n_3) = 1, h(n_3) = 0.5
…
h*(g_l) = 0, h(g_l) = 0
h <= h* everywhere, but MR is not satisfied: h(n_1) = 2.5 > h(n_2) + c(n_1, n_2) = 1.2 + 1 = 2.2.

Proof of MR leading to optimal path for every expanded node (1/2)
Let S – N_1 – N_2 – N_3 – N_4 – … – N_m – … – N_k be an optimal path from S to N_k (all of which might or might not have been explored). Let N_m be the last node on this path which is on the open list, i.e., all the ancestors from S up to N_{m-1} are in the closed list.
For every node N_p on the optimal path:
g*(N_p) + h(N_p) <= g*(N_p) + c(N_p, N_{p+1}) + h(N_{p+1}), by the monotone restriction
g*(N_p) + h(N_p) <= g*(N_{p+1}) + h(N_{p+1}), since g*(N_p) + c(N_p, N_{p+1}) = g*(N_{p+1}) on the optimal path
g*(N_m) + h(N_m) <= g*(N_k) + h(N_k), by transitivity
Since all ancestors of N_m on the optimal path are in the closed list, g(N_m) = g*(N_m).
=> f(N_m) = g(N_m) + h(N_m) = g*(N_m) + h(N_m) <= g*(N_k) + h(N_k)

Proof of MR leading to optimal path for every expanded node (2/2)
Now if N_k is chosen in preference to N_m, then f(N_k) <= f(N_m):
g(N_k) + h(N_k) <= g(N_m) + h(N_m) = g*(N_m) + h(N_m) <= g*(N_k) + h(N_k)
=> g(N_k) <= g*(N_k)
But g(N_k) >= g*(N_k), by definition.
Hence g(N_k) = g*(N_k).
This means that if N_k is chosen for expansion, the optimal path to it from S has already been found.
TRY proving this by induction on the length of the optimal path.

Monotonicity of f() values
Statement: f values of nodes expanded by A* increase monotonically, if h is monotone.
Proof: Suppose n_i and n_j are expanded with temporal sequentiality, i.e., n_j is expanded after n_i.

Proof (1/3)…
Possible cases for a rigorous proof (n_i expanded before n_j):
n_i and n_j co-existing on the open list
n_j comes to the open list as a result of expanding n_i and is expanded immediately
n_j's parent pointer changes to n_i, and n_j is then expanded
n_j expanded after n_i

Proof (2/3)…
All the previous cases are forms of the following two cases (think!):
CASE 1: n_j was on the open list when n_i was expanded. Hence, f(n_i) <= f(n_j) by the property of A*.
CASE 2: n_j comes to the open list due to the expansion of n_i.

Proof (3/3)…
[Figure: parent n_i with child n_j]
Case 2:
f(n_i) = g(n_i) + h(n_i)   (defn of f)
f(n_j) = g(n_j) + h(n_j)   (defn of f)
f(n_i) = g(n_i) + h(n_i) = g*(n_i) + h(n_i)   -- Eq 1 (since n_i is picked for expansion, n_i is on an optimal path)
With a similar argument for n_j we can write:
f(n_j) = g(n_j) + h(n_j) = g*(n_j) + h(n_j)   -- Eq 2
Also, h(n_i) <= h(n_j) + c(n_i, n_j)   -- Eq 3 (parent-child relation)
g*(n_j) = g*(n_i) + c(n_i, n_j)   -- Eq 4 (both nodes on the optimal path)
From Eq 1, 2, 3 and 4, f(n_i) <= f(n_j).
Hence proved.

Better way to understand monotonicity of f()
Let f(n_1), f(n_2), f(n_3), f(n_4), …, f(n_{k-1}), f(n_k) be the f values of k expanded nodes. The relationship between two consecutive expansions f(n_i) and f(n_{i+1}) always remains the same, i.e., f(n_i) <= f(n_{i+1}).
This is because f(n_i) = g(n_i) + h(n_i), and g(n_i) = g*(n_i) since n_i is an expanded node (proved theorem), so this value cannot change; the h(n_i) value also cannot change. Hence nothing after n_{i+1}'s expansion can change the above relationship.
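The monotone growth of f values can also be observed empirically. The sketch below (the toy graph and all names are assumptions for illustration) records f = g + h each time a node is expanded; with a monotone h the recorded sequence is non-decreasing.

```python
import heapq

def expansion_f_values(graph, h, start):
    """Record f = g + h at each expansion (illustrative sketch).
    With a monotone h the returned sequence is non-decreasing."""
    open_list = [(h[start], 0, start)]
    closed, f_seq = set(), []
    while open_list:
        f, g, node = heapq.heappop(open_list)
        if node in closed:
            continue
        closed.add(node)
        f_seq.append(f)   # f value at the moment of expansion
        for nbr, cost in graph.get(node, []):
            if nbr not in closed:
                heapq.heappush(open_list, (g + cost + h[nbr], g + cost, nbr))
    return f_seq
```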

A List of AI Search Algorithms
A*
AO*
IDA* (Iterative Deepening A*)
Minimax Search on Game Trees
Viterbi Search on Probabilistic FSA
Hill Climbing
Simulated Annealing
Gradient Descent
Stack Based Search
Genetic Algorithms
Memetic Algorithms