Adversarial Search In this lecture, we introduce a new search scenario: game playing.
1. two players,
2. zero-sum game (win-lose, lose-win, or draw),
3. perfect accessibility to game information (unlike bridge),
4. deterministic rules (no dice used, unlike backgammon).
The problem is still formulated as a graph on a state space, with each state being a possible configuration of the game.
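As a concrete illustration of this formulation (not from the slides), here is a minimal Python sketch for tic-tac-toe: a state is a board configuration plus the player to move, the operators are the legal placements for that player, and terminal states are wins, losses, and draws. All the names below (legal_moves, apply_move, and so on) are my own, chosen for illustration.

    # Sketch of the state-space formulation for tic-tac-toe (illustrative names).
    # A state is (board, to_move); the board is a tuple of 9 cells: 'X', 'O' or ' '.
    LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

    START = (tuple(' ' * 9), 'X')               # empty board, MAX ('X') moves first

    def legal_moves(state):
        board, _ = state
        return [i for i in range(9) if board[i] == ' ']

    def apply_move(state, i):
        board, player = state
        new_board = board[:i] + (player,) + board[i + 1:]
        return (new_board, 'O' if player == 'X' else 'X')

    def winner(board):
        for a, b, c in LINES:
            if board[a] != ' ' and board[a] == board[b] == board[c]:
                return board[a]                 # 'X' (a MAX win) or 'O' (a MIN win)
        return None

    def is_terminal(state):
        board, _ = state
        return winner(board) is not None or ' ' not in board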

The Search Tree We call the two players MAX and MIN, respectively; levels of the tree alternate between MAX's and MIN's moves, with MAX at the root (MAX, MIN, MAX, ...). Due to space/time complexity, an algorithm can often afford to search only to a certain depth of the tree.

Game Tree Then for a leaf s of the tree, we compute a static evaluation function e(s) based on the configuration s, using features such as piece advantage or control of positions and lines. We call it static because it is based on a single node, not on looking ahead some number of steps. For non-terminal nodes, we compute backed-up values propagated from the leaves. For example, for a tic-tac-toe game we can select
e(s) = (# of open lines for MAX) - (# of open lines for MIN),
where a line is open for a player if it contains none of the opponent's marks. For the board position shown on the original slide, e(s) = 5 - 4 = 1.
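Below is a small Python sketch (not from the slides) of this open-lines evaluation and of backing values up with a depth-limited minimax. The demo board layout (X in the centre, O in a corner) is an assumption on my part; it reproduces the e(s) = 5 - 4 = 1 value quoted above.

    # Open-lines evaluation and backed-up minimax values (illustrative sketch).
    # A board is a tuple of 9 cells: 'X' (MAX), 'O' (MIN) or ' ' (empty).
    LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

    def e(board):
        # static evaluation: open lines for MAX minus open lines for MIN
        open_max = sum(all(board[i] != 'O' for i in line) for line in LINES)
        open_min = sum(all(board[i] != 'X' for i in line) for line in LINES)
        return open_max - open_min

    def minimax(board, to_move, depth):
        # backed-up value: search 'depth' plies, then apply e() at the leaves
        # (win/loss detection is omitted to keep the sketch short)
        moves = [i for i in range(9) if board[i] == ' ']
        if depth == 0 or not moves:
            return e(board)
        values = [minimax(board[:i] + (to_move,) + board[i + 1:],
                          'O' if to_move == 'X' else 'X', depth - 1)
                  for i in moves]
        return max(values) if to_move == 'X' else min(values)

    # Assumed layout: X in the centre, O in the top-left corner
    s = list(' ' * 9)
    s[4], s[0] = 'X', 'O'
    s = tuple(s)
    print(e(s))                 # -> 1  (5 open lines for MAX, 4 for MIN)
    print(minimax(s, 'X', 2))   # value backed up from leaves two plies deeper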

Alpha-Beta Search During the minimax search, we find that we can prune the search tree to some extent without changing the final search result. For each non-terminal node we define an α value and a β value. A MIN node updates its β value as the minimum over all children evaluated so far, so its β value decreases monotonically as more children are evaluated; its α value is passed down from its parent as a threshold. A MAX node updates its α value as the maximum over all children evaluated so far, so its α value increases monotonically as more children are evaluated; its β value is passed down from its parent as a threshold.
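For example (an illustrative trace, not on the slide): suppose a MAX node already has α = 3 from one evaluated child and passes α = 3 down to its next child, a MIN node. If that MIN node's first child evaluates to 2, the MIN node's β drops to 2 ≤ α = 3; nothing evaluated later can raise this MIN node's value above 2, while MAX already has an alternative worth 3, so the MIN node's remaining children need not be examined.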

Alpha-Beta Search We start the alpha-beta search with AB(root, α = -∞, β = +∞) and then do a depth-first search that updates (α, β) along the way; we cut off (stop expanding) a node whenever the condition α ≥ β is satisfied. A recursive function for alpha-beta search with depth bound Dmax follows.

float AB(s, α, β) {
    if (depth(s) == Dmax) return e(s);        // leaf node: static evaluation
    for k = 1 to b(s) do {                    // b(s) is the number of children of s
        s' := k-th child node of s;
        if (s is a MAX node) {
            α := max(α, AB(s', α, β));        // update α for a MAX node
            if (α ≥ β) return β;              // β-cut
        } else {
            β := min(β, AB(s', α, β));        // update β for a MIN node
            if (β ≤ α) return α;              // α-cut
        }
    }                                         // end for
    if (s is a MAX node) return α;            // all children expanded
    else return β;
}
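For reference, here is a directly runnable Python transcription of the function above (a sketch; the small example tree and the names ab and is_max are my own, not from the lecture). The depth bound is implicit because the example tree's leaves already carry static values.

    # Runnable Python transcription of the AB pseudocode (illustrative sketch).
    # Leaves are numbers (static values e(s)); internal nodes are lists of children.
    # Levels alternate MAX, MIN, MAX, ... with MAX at the root.
    import math

    def ab(node, alpha, beta, is_max):
        if not isinstance(node, list):            # leaf: return its static value
            return node
        if is_max:
            for child in node:
                alpha = max(alpha, ab(child, alpha, beta, False))
                if alpha >= beta:                 # beta-cut
                    return beta
            return alpha
        else:
            for child in node:
                beta = min(beta, ab(child, alpha, beta, True))
                if beta <= alpha:                 # alpha-cut
                    return alpha
            return beta

    tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]    # MAX root over three MIN nodes
    print(ab(tree, -math.inf, math.inf, True))    # -> 3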