Adversarial Search We have experience in search where we assume that we are the only intelligent agent and that we have explicit control over the “world”. Let's consider what happens when we relax those assumptions.

Two Player Games Max always moves first; Min is the opponent. We have: an initial state; a set of operators; a terminal test (which tells us when the game is over); and a utility function (evaluation function). The utility function is like the heuristic function we have seen in the past, except that it evaluates a node in terms of how good it is for each player: positive values indicate states advantageous for Max, negative values indicate states advantageous for Min.

[Figure: a game tree with alternating Max and Min levels; utilities are assigned at the terminal states.]

[Figure: a simple abstract game. Max makes a move (A1, A2 or A3), then Min replies (A11 through A33), leading to terminal utilities.] An action by one player is called a ply; two ply (an action and a counter-action) is called a move.

The Minimax Algorithm Generate the game tree down to the terminal nodes. Apply the utility function to the terminal nodes. For a set S of sibling nodes, pass up to the parent the lowest value in S if it is Min's turn to move at the parent, or the largest value in S if it is Max's turn. Recursively do the above until the backed-up values reach the initial state. The value of the initial state is the minimum score Max can guarantee.
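The backing-up procedure above can be sketched in a few lines of Python. The representation is an illustrative assumption, not from the slides: a game tree is a nested list whose leaves are terminal utilities from Max's point of view.

```python
def minimax(node, maximizing=True):
    """Minimax value of a game tree given as nested lists.

    Leaves (plain numbers) are terminal utilities from Max's point of
    view; internal nodes are lists of child subtrees.  This toy
    representation is an assumption for illustration.
    """
    if not isinstance(node, list):       # terminal test
        return node                      # utility function
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A small example: Max picks a branch, then Min picks a leaf within it.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree))  # 3: the first branch guarantees Max at least 3
```

Min backs up the smallest leaf of each branch (3, 2, 2), and Max then takes the largest of those backed-up values.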

[Figure: the same abstract game tree with backed-up values at each node.] In this game Max's best move is A1, because he is guaranteed a score of at least 3.

Although the Minimax algorithm is optimal, there is a problem. The time complexity is O(b^m), where b is the effective branching factor and m is the depth of the terminal states. (Note that the space complexity is only O(bm), because we can do depth-first search.) One possible solution is depth-limited Minimax search: search the game tree as deep as you can in the given time, evaluate the fringe nodes with the utility function, back up the values to the root, choose the best move, and repeat. [Figure: we would like to do Minimax on the full game tree, but we don't have time, so we explore it only to some manageable cutoff depth.]
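Depth-limited search only changes the stopping rule: recursion halts at a terminal node or when the depth budget runs out, and at the cutoff an evaluation function stands in for the true utility. A sketch reusing the nested-list trees from above, with a made-up fringe heuristic (both are illustrative assumptions):

```python
def dl_minimax(node, depth, maximizing=True, evaluate=None):
    # Stop at a terminal node, or at the cutoff depth, where the
    # (heuristic) evaluation function replaces the true utility.
    if not isinstance(node, list):
        return node
    if depth == 0:
        return evaluate(node)
    values = [dl_minimax(c, depth - 1, not maximizing, evaluate)
              for c in node]
    return max(values) if maximizing else min(values)

def avg_leaves(node):
    # Illustrative stand-in heuristic: the average of all leaves below.
    if not isinstance(node, list):
        return node
    vals = [avg_leaves(c) for c in node]
    return sum(vals) / len(vals)

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(dl_minimax(tree, 1, evaluate=avg_leaves))  # heuristic estimate
print(dl_minimax(tree, 2, evaluate=avg_leaves))  # deep enough: exact, 3
```

With depth 1 the answer is only as good as the heuristic; with depth 2 the cutoff is never reached on this tree and the exact Minimax value comes back.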

Example Utility Functions I Tic Tac Toe Assume Max is playing “X”. e(n) = +∞ if n is a win for Max; −∞ if n is a win for Min; otherwise (number of rows, columns and diagonals still open to Max) − (number of rows, columns and diagonals still open to Min). [Figure: two example boards, one with e(n) = 2 and one with e(n) = 1.]
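The open-lines count can be written down directly. The board encoding here (a 9-character string, index 0 at the top-left) is an illustrative assumption:

```python
import math

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def e(board):
    # board: 9 chars, 'X' (Max), 'O' (Min), or ' ' for empty.
    for a, b, c in LINES:
        if board[a] == board[b] == board[c] != ' ':
            return math.inf if board[a] == 'X' else -math.inf
    # A line is still open to a player if it has no opponent mark.
    open_x = sum(all(board[i] != 'O' for i in line) for line in LINES)
    open_o = sum(all(board[i] != 'X' for i in line) for line in LINES)
    return open_x - open_o

print(e("    X    "))  # X alone in the centre: 8 open lines vs 4, so 4
```

A lone X in the centre leaves all 8 lines open to Max but only the 4 lines avoiding the centre open to Min.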

Example Utility Functions II Chess I Assume Max is “White”. Assume each piece has the following values: pawn = 1; knight = 3; bishop = 3; rook = 5; queen = 9. Let w = the sum of the values of the white pieces and b = the sum of the values of the black pieces. Then e(n) = (w − b) / (w + b). Note that this value ranges between −1 and 1.
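As a minimal sketch of this material-balance function (the piece-name encoding is assumed; kings are omitted since both sides always have exactly one):

```python
VALUES = {'pawn': 1, 'knight': 3, 'bishop': 3, 'rook': 5, 'queen': 9}

def material_eval(white, black):
    # white, black: lists of piece names, e.g. ['queen', 'pawn'].
    w = sum(VALUES[p] for p in white)
    b = sum(VALUES[p] for p in black)
    return (w - b) / (w + b)   # in (-1, 1); 0 means material equality

print(material_eval(['queen', 'pawn'], ['rook']))  # (10 - 5) / 15 = 1/3
```

Dividing by w + b normalizes the score, so being a queen up matters more in an endgame with few pieces than in a full middlegame.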

Example Utility Functions III Chess II The previous evaluation function naively gave the same weight to a piece regardless of its position on the board. Let X_i be the number of squares the i-th piece attacks, and take e(n) = piece_1 value * X_1 + piece_2 value * X_2 + … I have not finished the equation. The important thing to realize is that the evaluation function can be a weighted linear function of the pieces' values and their positions.

Utility Functions We have seen that the ability to play a good game is highly dependent on the evaluation functions. How do we come up with good evaluation functions? Interview an expert, or use machine learning.

Alpha-Beta Pruning I We have seen how to use Minimax search to play an optimal game. We have seen that, because of time limitations, we may have to use a cutoff depth to make the search tractable. Using a cutoff causes problems because of the “horizon” effect. Is there some way we can search deeper in the same amount of time? Yes! Use Alpha-Beta Pruning. [Figure: the horizon effect: the best move found before the cutoff has only losing children, while the game-winning move lies just beyond the cutoff.]

Alpha-Beta Pruning II [Figure: the abstract game tree again. After Min's reply worth 2 is found under A2, the remaining children of A2 are pruned: that subtree is worth at most 2, and Max is already guaranteed 3 under A1.] “If you have an idea that is surely bad, don't take the time to see how truly awful it is.” (Pat Winston)

Alpha-Beta Pruning III Effectiveness of Alpha-Beta. Alpha-Beta is guaranteed to compute the same Minimax value for the root node as computed by Minimax. In the worst case Alpha-Beta does NO pruning, examining b^d leaf nodes, where each node has b children and a d-ply search is performed. In the best case, Alpha-Beta will examine only about 2b^(d/2) leaf nodes; hence if you hold the number of leaf nodes fixed, you can search twice as deep as Minimax! The best case occurs when each player's best move is the leftmost alternative (i.e., the first child generated): at MAX nodes the child with the largest value is generated first, and at MIN nodes the child with the smallest value is generated first. This suggests that we should order the operators carefully. In the chess program Deep Blue, it was found empirically that Alpha-Beta pruning reduced the average branching factor at each node to about 6, instead of about 35 to 40.
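The pruning rule can be added to the earlier nested-list Minimax sketch in a few lines (again an illustrative toy under the same assumed representation, not Deep Blue's implementation):

```python
def alphabeta(node, alpha=float('-inf'), beta=float('inf'),
              maximizing=True):
    # Nested-list game tree: leaves are terminal utilities for Max.
    if not isinstance(node, list):
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break   # Min would never allow play to reach here
        return value
    value = float('inf')
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break       # Max already has a better option elsewhere
    return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree))  # 3, the same value Minimax computes
```

On this tree the search abandons the second branch as soon as it sees the leaf 2 (alpha is already 3 from the first branch), without examining the leaves 4 and 6 at all.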