Game Playing Chapter 5

Game playing §Search applied to a problem against an adversary l some actions are not under the control of the problem-solver l there is an opponent (hostile agent) §Since it is a search problem, we must specify states & operations/actions l initial state = current board; operators = legal moves; goal state = game over; utility function = value for the outcome of the game l usually, (board) games have well-defined rules & the entire state is accessible

Basic idea §Consider all possible moves for yourself §Consider all possible moves for your opponent §Continue this process until a point is reached where we know the outcome of the game §From this point, propagate the best move back l choose best move for yourself at every turn l assume your opponent will make the optimal move on their turn

Examples §Tic-tac-toe §Connect Four §Checkers

Problem §For interesting games, it is simply not computationally possible to look at all possible moves l in chess, there are on average 35 choices per turn l on average, there are about 50 moves per player l thus, the number of possibilities to consider is about 35^100 ≈ 10^154
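Plugging in the slide's numbers (about 35 choices per turn, about 50 moves per player, i.e. roughly 100 plies), the size of the full game tree can be checked directly:

```python
# Rough size of the chess game tree using the figures from the slide:
# ~35 legal moves per turn, ~50 moves per player = ~100 plies.
branching_factor = 35
plies = 100
positions = branching_factor ** plies

# The count has 155 decimal digits -- far too many positions to enumerate.
print(len(str(positions)))  # -> 155
```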

Solution §Given that we can only look ahead k moves and can't see all the way to the end of the game, we need a heuristic function that substitutes for looking to the end of the game l this is usually called a static board evaluator (SBE) l a perfect static board evaluator would tell us for which moves we could win, lose or draw l possible for tic-tac-toe, but not for chess

Creating a SBE approximation §Typically made up of rules of thumb l for example, in most chess books each piece is given a value pawn = 1; rook = 5; queen = 9; etc. l further, there are other important characteristics of a position, e.g., center control l we combine all of these factors into one function, potentially weighting each aspect differently, to determine the value of a position: board_value = α * material_balance + β * center_control + … [the coefficients might change as the game goes on]
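A minimal sketch of such a weighted-sum evaluator. The board encoding (a dict of square → piece letter, uppercase for our pieces), the feature set, and the weights are all illustrative assumptions, not taken from any real chess program:

```python
# Illustrative static board evaluator of the weighted-sum form above.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_balance(board):
    """board: dict square -> piece letter; uppercase = ours, lowercase = opponent's."""
    score = 0
    for piece in board.values():
        value = PIECE_VALUES.get(piece.upper(), 0)
        score += value if piece.isupper() else -value
    return score

def center_control(board):
    """Our pieces minus the opponent's on the four central squares."""
    center = {"d4", "d5", "e4", "e5"}
    ours = sum(1 for sq, p in board.items() if sq in center and p.isupper())
    theirs = sum(1 for sq, p in board.items() if sq in center and p.islower())
    return ours - theirs

def board_value(board, w_material=1.0, w_center=0.1):
    # board_value = alpha * material_balance + beta * center_control + ...
    return w_material * material_balance(board) + w_center * center_control(board)
```

A real engine would add many more terms (mobility, king safety, pawn structure) and tune the weights, possibly varying them by game phase as the slide notes.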

Compromise §If we could search to the end of the game, then choosing a move would be relatively easy l just use minimax §Or, if we had a perfect scoring function (SBE), we wouldn’t have to do any search (just choose best move from current state -- one step look ahead) §Since neither is feasible for interesting games, we combine the two ideas

Basic idea §Build the game tree as deep as possible given the time constraints §apply an approximate SBE to the leaves §propagate scores back up to the root & use this information to choose a move §example

Score percolation: MINIMAX §When it is my turn, I will choose the move that maximizes the (approximate) SBE score §When it is my opponent’s turn, they will choose the move that minimizes the SBE l because we are dealing with competitive games, what is good for me is bad for my opponent & what is bad for me is good for my opponent l assume the opponent plays optimally [worst-case assumption]

MINIMAX algorithm §Start at the leaves of the tree and apply the SBE §If it is my turn, choose the maximum SBE score for each sub-tree §If it is my opponent’s turn, choose the minimum score for each sub-tree §The scores on the leaves indicate how good the board appears from that point §Example
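The procedure above can be sketched in a few lines. Here `children` and `sbe` are assumed game-specific helpers (move generation and the static board evaluator):

```python
# Sketch of depth-limited minimax: apply the SBE at the leaves, take the
# max on our turns and the min on the opponent's turns.
def minimax(state, depth, maximizing, children, sbe):
    kids = children(state)
    if depth == 0 or not kids:           # leaf or depth limit: apply the SBE
        return sbe(state)
    if maximizing:                       # my turn: choose the maximum
        return max(minimax(k, depth - 1, False, children, sbe) for k in kids)
    else:                                # opponent's turn: choose the minimum
        return min(minimax(k, depth - 1, True, children, sbe) for k in kids)
```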

Example

Alpha-beta pruning §While minimax is an effective algorithm, it can be inefficient l one reason is that it does unnecessary work l it evaluates sub-trees whose value is irrelevant to the final answer l alpha-beta pruning gets the same answer as minimax but eliminates some of this useless work l example: simply ask whether the result would change if this node’s score were +infinity or -infinity

Cases of alpha-beta pruning §Min level (alpha-cutoff) l can stop expanding a sub-tree when a value less than the best-so-far is found, because the maximizer will take the better-scoring route [example] §Max level (beta-cutoff) l can stop expanding a sub-tree when a value greater than the best-so-far is found, because the opponent will force you to take the lower-scoring route [example]

Alpha-beta algorithm §Maximizer’s moves have an alpha value l it is the current lower bound on the node’s score (i.e., max can do at least this well) l if alpha >= beta of parent, then stop since the opponent won’t allow us to take this route §Minimizer’s moves have a beta value l it is the current upper bound on the node’s score (i.e., min will do no worse than this) l if beta <= alpha of parent, then stop since we (max) won’t choose this route
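In code, the two bounds and cutoffs above look like this; `children` and `sbe` are again assumed game-specific helpers (move generation and the static board evaluator):

```python
# Sketch of minimax with alpha-beta cutoffs. Returns the same value as
# plain minimax, but prunes branches neither side would allow.
def alphabeta(state, depth, alpha, beta, maximizing, children, sbe):
    kids = children(state)
    if depth == 0 or not kids:
        return sbe(state)
    if maximizing:
        value = float("-inf")
        for k in kids:
            value = max(value, alphabeta(k, depth - 1, alpha, beta,
                                         False, children, sbe))
            alpha = max(alpha, value)    # raise max's lower bound
            if alpha >= beta:            # opponent won't allow this route
                break
        return value
    else:
        value = float("inf")
        for k in kids:
            value = min(value, alphabeta(k, depth - 1, alpha, beta,
                                         True, children, sbe))
            beta = min(beta, value)      # lower min's upper bound
            if beta <= alpha:            # we (max) won't choose this route
                break
        return value
```

The root is called with alpha = -infinity and beta = +infinity, and the bounds tighten as the search descends.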

Example

Use §We project ahead k moves, but we only do one (the best) move then §After our opponent moves, we project ahead k moves so we are possibly repeating some work §However, since most of the work is at the leaves anyway, the amount of work we redo isn’t significant (think of iterative deepening)

Alpha-beta performance §Best-case: can search to twice the depth in a fixed amount of time [O(b^(d/2)) vs. O(b^d)] §Worst-case: no savings l alpha-beta pruning & minimax always return the same answer l the difference is the amount of work they do l effectiveness depends on the order in which successors are examined (want to examine the best moves first) §Graph of savings

Refinements §Waiting for quiescence l avoids the horizon effect: disaster is lurking just beyond our search depth on the nth move (the maximum depth I can see) I take your rook, but on the (n+1)th move (a depth to which I don’t look) you checkmate me l solution: when predicted values are changing frequently, search deeper in that part of the tree (quiescence search)
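One way the quiescence idea could look in code. Here `is_quiet` is an assumed predicate (e.g., no captures or checks pending), and the small extension budget is an illustrative safeguard, not a standard constant:

```python
# Sketch of quiescence search: at the nominal depth limit, keep searching
# "noisy" positions a little further instead of trusting the SBE there.
def quiescence_value(state, depth, maximizing, children, sbe,
                     is_quiet, max_extra=4):
    kids = children(state)
    if not kids:
        return sbe(state)
    if depth <= 0:
        # Past the nominal horizon: stop only once the position is quiet
        # (or the extension budget runs out).
        if is_quiet(state) or max_extra == 0:
            return sbe(state)
        max_extra -= 1
    pick = max if maximizing else min
    return pick(quiescence_value(k, depth - 1, not maximizing,
                                 children, sbe, is_quiet, max_extra)
                for k in kids)
```

In the tiny example below, a depth-1 search would greedily score the "capture" branch at +10; extending the noisy node reveals it actually loses material, so the quiet branch is preferred.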

Secondary search §Find the best move by looking to depth d §Look k steps beyond this best move to see if it still looks good §No? Look further at second best move, etc. l in general, do a deeper search at parts of the tree that look “interesting” §Picture

Book moves §Build a database of opening moves, end games, tough examples, etc. §If the current state is in the database, use the knowledge in the database to determine the quality of a state §If it’s not in the database, just do alpha-beta pruning
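The control flow described above is simple: consult the precomputed table first and fall back to search only on unknown positions. The book contents and the search function here are illustrative placeholders:

```python
# Sketch of the book-move lookup: database first, search as a fallback.
def choose_move(state, book, search):
    """book: dict mapping a hashable position to its known best move."""
    if state in book:
        return book[state]      # known opening / endgame position
    return search(state)        # otherwise fall back to alpha-beta search
```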

AI & games §Initially felt to be a great AI testbed §It turned out, however, that brute-force search is better than a lot of knowledge engineering l scaling up by dumbing down; perhaps, then, intelligence doesn’t have to be human-like l more high-speed hardware issues than AI issues l however, games are still good test-beds for learning