Monotonicity and Admissible Search


Admissible Search: a search algorithm that is guaranteed to find the shortest path to the goal. Monotonicity: local admissibility is called monotonicity. This property ensures that the algorithm finds a consistently minimal path to each state it encounters during the search.

It takes the cumulative effect into consideration (for a distance problem). Along any path from the root, the cost never decreases; if this is true, the heuristic is monotonic in nature. For a node n and its successor n': f(n) = g(n) + h(n), f(n') = g(n') + h(n'), and f(n') >= f(n).
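A small sketch of this condition, checking that f = g + h never decreases while walking a path; the graph, heuristic values, and costs below are illustrative assumptions, not taken from the slides.

```python
# Check that f = g + h never decreases along a path of a small example graph.
edges = {('S', 'A'): 2, ('A', 'G'): 3}   # illustrative edge costs
h = {'S': 5, 'A': 3, 'G': 0}             # illustrative heuristic estimates

def f_values_along(path, edges, h):
    """Return the f = g + h values seen while walking the given path."""
    g, fs = 0, [h[path[0]]]               # at the root, g = 0 so f = h
    for n, n2 in zip(path, path[1:]):
        g += edges[(n, n2)]
        fs.append(g + h[n2])
    return fs

fs = f_values_along(['S', 'A', 'G'], edges, h)
print(fs, all(a <= b for a, b in zip(fs, fs[1:])))  # [5, 5, 5] True -> monotonic
```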

Pathmax: if f(n') < f(n) (non-monotonic), then set f(n') = max(f(n), g(n') + h(n')), i.e. the child takes its parent node's f-value. An equivalent statement, in terms of the heuristic cost alone: h(n) - h(n') <= cost(n, n').
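A minimal sketch of the pathmax adjustment as it would be used inside an A*-style expansion; the function name and the concrete numbers are illustrative assumptions.

```python
def pathmax_f(parent_g, parent_f, step_cost, child_h):
    """Return (g(n'), f(n')) with f forced to be non-decreasing along the path."""
    child_g = parent_g + step_cost           # g(n') = g(n) + cost(n, n')
    child_f = child_g + child_h              # f(n') = g(n') + h(n')
    return child_g, max(parent_f, child_f)   # pathmax: f(n') = max(f(n), g(n') + h(n'))

# Parent: g = 3, h = 4, f = 7. Edge cost 1, child heuristic 2.
# Raw f(n') = 4 + 2 = 6 < 7, so pathmax reports f(n') = 7 instead.
print(pathmax_f(parent_g=3, parent_f=7, step_cost=1, child_h=2))  # (4, 7)
```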

Informedness: for two admissible heuristics h1 and h2, if h1(n) <= h2(n) for all states n in the search space, heuristic h2 is said to be "more informed" than h1. Both h1 and h2 can find the optimal path, but h2 examines fewer states in the process.

Monotonic heuristics are admissible. Consider the states S1, S2, ..., Sg along a path, where S1 is the start and Sg is the goal:
h(S1) - h(S2) <= cost(S1, S2)
h(S2) - h(S3) <= cost(S2, S3)
...
h(Sg-1) - h(Sg) <= cost(Sg-1, Sg)
Adding these inequalities gives h(S1) - h(Sg) <= cost(S1, Sg); since h(Sg) = 0 at the goal, h(S1) never overestimates the cost of reaching the goal.

With h(n) = 0 we get uninformed search, for example breadth-first search. A* is more informed than breadth-first search.

Adversary Search (Games). Aim: move in such a way as to stop the opponent from making a good or winning move. Game playing can use tree search; the game tree alternates between the two players' moves.

Things to remember:
1. Every move is vital.
2. The opponent could win at the next move or at subsequent moves.
3. Keep track of the safest moves.
4. The opponent is well informed.
5. Consider how the opponent is likely to respond to your moves.

Two-move win. Player 1 = P1, Player 2 = P2. The safest move for P1 is always A-C; the safest move for P2 is always A-D (if allowed the first move). [Game tree: root A with children B, C, D and leaves E-J; levels alternate between P1 and P2 moves; P2 wins on one branch.]

Minimax Procedure for Games. Assumption: the opponent has the same knowledge of the state space and makes a consistent effort to win. MIN: label for the opponent, who tries to minimize the other player's (MAX's) score. MAX: the player trying to win (maximize advantage). Both MAX and MIN are equally informed.

Rules:
1. Label the levels MAX and MIN.
2. Assign values to the leaf nodes: 0 if MIN wins, 1 if MAX wins.
3. Propagate values up the graph: if the parent is MAX, assign it the maximum value of its children; if the parent is MIN, assign it the minimum value of its children.
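A short sketch of these rules (an illustration, not the slides' code), assuming the game tree is given as nested lists whose leaves are already labelled 0 or 1.

```python
def minimax(node, max_to_move=True):
    """Propagate 1 (MAX wins) / 0 (MIN wins) values up a game tree.

    A node is either a leaf value (0 or 1) or a list of child nodes.
    """
    if isinstance(node, int):                     # leaf: already labelled
        return node
    values = [minimax(child, not max_to_move) for child in node]
    return max(values) if max_to_move else min(values)

# Tiny example: MAX to move at the root, MIN at the next level.
tree = [[1, 1], [0, 1]]
print(minimax(tree))  # 1 -> MAX can force a win via the first branch
```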

Minimaxing to a fixed ply depth (complex games). Strategy: n-move look-ahead. Suppose you start in the middle of the game; one cannot assign win/lose values at that stage. In this case a heuristic evaluation is applied to the frontier states, and the values are then propagated back to indicate a winning or losing trend.

Summary: assign heuristic values to the leaves of the n-level graph and propagate them up to the root node. This value indicates the best state that can be reached in n moves from the start state. Maximize for MAX parents, minimize for MIN parents.
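A sketch of this n-move look-ahead with a heuristic evaluation at the cutoff; `evaluate`, `successors`, and `is_terminal` are hypothetical placeholders for a real game's functions, and the toy usage below is purely illustrative.

```python
def minimax_depth(state, depth, max_to_move, evaluate, successors, is_terminal):
    """n-move look-ahead: evaluate heuristically at the depth limit
    (or at a terminal state) and back the values up to the root."""
    if depth == 0 or is_terminal(state):
        return evaluate(state)                    # heuristic value at the frontier
    values = [minimax_depth(s, depth - 1, not max_to_move,
                            evaluate, successors, is_terminal)
              for s in successors(state)]
    return max(values) if max_to_move else min(values)

# Toy usage: states are integers, each with two successors; the "heuristic"
# is simply the state value.
print(minimax_depth(1, depth=2, max_to_move=True,
                    evaluate=lambda s: s,
                    successors=lambda s: [2 * s, 2 * s + 1],
                    is_terminal=lambda s: False))  # 6
```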

Example: Tic-Tac-Toe. M(n) = total number of my possible winning lines; O(n) = total number of the opponent's possible winning lines; E(n) = M(n) - O(n). [Board diagrams showing X and O placements omitted.]
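A sketch of the E(n) = M(n) - O(n) evaluation for tic-tac-toe; the board encoding (a 3x3 list holding 'X', 'O', or None) is an assumption made for illustration.

```python
LINES = ([[(r, c) for c in range(3)] for r in range(3)] +               # rows
         [[(r, c) for r in range(3)] for c in range(3)] +               # columns
         [[(i, i) for i in range(3)], [(i, 2 - i) for i in range(3)]])  # diagonals

def evaluate(board, me='X', opp='O'):
    """E(n) = M(n) - O(n): my open winning lines minus the opponent's."""
    def open_lines(blocker):
        # a line is still winnable if the blocking player has no mark on it
        return sum(all(board[r][c] != blocker for r, c in line) for line in LINES)
    return open_lines(opp) - open_lines(me)

# X in the centre: X has 8 open lines, O has 4, so E(n) = 4.
board = [[None, None, None], [None, 'X', None], [None, None, None]]
print(evaluate(board))  # 4
```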

Horizon Effect: heuristics applied with a limited look-ahead may steer play into a position that looks good but turns bad just beyond the search horizon, leaving the player in a losing position. Searching to additional depth can partially reduce this effect.

Alpha-Beta Procedure: the minimax procedure pursues all branches in the space, some of which could have been ignored (pruned). To improve efficiency, pruning is applied to two-person games.

Simple idea (short-circuit evaluation): for the test "A > 5 or B < 0", if the first condition A > 5 succeeds, then B < 0 need not be evaluated; for "A > 5 and B < 0", if A > 5 fails, then evaluating B < 0 is unnecessary.

MAX can score at most -0.2 when moving along a - c - e; MAX has a better option, moving to b. [Game tree with alternating MAX/MIN levels: root a (MAX) with MIN children b = 0.4 and c; c has MAX children d = 0.6 and e; e has MIN children f = -0.5 and g = -0.2. Backing up: e = -0.2, c = -0.2, so MAX prefers b = 0.4.]

A MAX node ignores values <= alpha (the value it can already guarantee) at MIN nodes below it; a MIN node ignores values >= beta (the most it can be held to) at MAX nodes below it. [Tree: MAX node A with MIN children B = 10 and C; C's children include G = 0 and H. Node C can score at most 0 (beta), while node A can already score at least 10 (alpha); since 0 < 10, the remaining children of C need not be examined.]
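A compact sketch of alpha-beta pruning on a nested-list game tree (the same illustrative representation as the minimax sketch above); the example tree mirrors the slide's A/B/C/G values.

```python
def alphabeta(node, alpha=float('-inf'), beta=float('inf'), max_to_move=True):
    """Minimax with alpha-beta pruning on a nested-list game tree."""
    if isinstance(node, (int, float)):      # leaf node: static value
        return node
    if max_to_move:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:               # MIN above will never allow this
                break                       # beta cut-off
        return value
    else:
        value = float('inf')
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:               # MAX above already has something better
                break                       # alpha cut-off
        return value

# MIN child B backs up 10, so once C's first leaf shows C <= 0,
# the rest of C's children are pruned (alpha = 10 >= beta = 0).
print(alphabeta([[10], [0, 99]]))  # 10
```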

Complexity Reduction. The complexity cost can be estimated roughly by measuring the size of the open and closed lists. (A) Beam Search: only the n most promising states are kept for further consideration, i.e. a bound is applied to the open list. The procedure may miss the solution by pruning it too early. (A sketch follows after item (B) below.)

(B) More informedness: apply more informed heuristics to reduce the search complexity. This may increase the computational cost of computing the heuristic itself.
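A sketch of beam search from item (A), with the open list bounded to the n most promising states; `successors`, `heuristic`, and the toy integer problem are illustrative assumptions.

```python
def beam_search(start, goal, successors, heuristic, beam_width=2, max_steps=100):
    """Keep only the beam_width best states on the open list at each step.

    May miss a solution that an unbounded best-first search would find,
    because the path to the goal can be pruned too early.
    """
    frontier = [start]
    for _ in range(max_steps):
        if goal in frontier:
            return goal
        # Expand every state in the beam, then keep only the n best children.
        children = [s for state in frontier for s in successors(state)]
        if not children:
            return None
        frontier = sorted(children, key=heuristic)[:beam_width]
    return None

# Toy problem: states are integers, the goal is 17, successors increment or
# double the state, and the heuristic is the distance to the goal.
print(beam_search(1, 17,
                  successors=lambda s: [s + 1, 2 * s],
                  heuristic=lambda s: abs(17 - s)))  # 17
```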