Advanced Artificial Intelligence

Lecture 3: Adversarial Search (Games)

Outline
- Games (Textbook 5.1)
- Optimal decisions in games (5.2)
- Alpha-beta pruning (5.3)
- Stochastic games (5.5)

Types of Games
- Deterministic (chess)
- Stochastic (soccer; also multi-agent per team)
- Partially observable (poker; also n > 2 players and stochastic)
- Large state space (Go)

Game Playing State-of-the-Art
- Chess: Deep Blue defeated human world champion Garry Kasparov in a six-game match in 1997. Deep Blue examined 200 million positions per second and used very sophisticated evaluation plus undisclosed methods for extending some lines of search up to 40 ply. Current programs are even better.
- Checkers: Chinook ended the 40-year reign of human world champion Marion Tinsley in 1994. It used an endgame database defining perfect play for all positions involving 8 or fewer pieces on the board, a total of 443 billion positions. Checkers is now solved!
- Othello: Human champions refuse to compete against computers, which are too good.
- Go: Human champions are just beginning to be challenged by machines, though the best humans still beat the best machines. In Go, b > 300, so most programs use pattern knowledge bases to suggest plausible moves, along with aggressive pruning and Monte Carlo roll-outs.

Deterministic, Fully Observable
Many possible formalizations; one is:
- States: S (start at s0)
- Players: P = {1...N} (usually take turns; often N = 2)
- Actions: A (may depend on player / state)
- Transition function: T(s, a) → s'  (for simultaneous moves: T(s, {a_i}) → s')
- Terminal test: Terminal(s) → {t, f}
- Terminal utilities: U(s, player) → R
A solution for a player is a policy: π(s) → a
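To make the formalization concrete, here is a minimal Python sketch of such a game interface; all names are hypothetical illustrations, not from the lecture:

```python
from typing import Any, Iterable, List

# A minimal, hypothetical interface matching the slide's formalization.
# Concrete games supply their own state and action representations.
class Game:
    def initial_state(self) -> Any: ...              # s0
    def players(self) -> List[int]: ...              # P = {1..N}
    def player_to_move(self, s) -> int: ...          # whose turn it is in s
    def actions(self, s) -> Iterable[Any]: ...       # A(s)
    def result(self, s, a) -> Any: ...               # T(s, a) -> s'
    def is_terminal(self, s) -> bool: ...            # Terminal(s)
    def utility(self, s, player: int) -> float: ...  # U(s, player)

# A solution for a player is a policy pi(s) -> a, i.e. any function
# from states to legal actions:
def random_policy(game: Game, s):
    import random
    return random.choice(list(game.actions(s)))
```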

Deterministic Single-Player
Deterministic, single player (solitaire), perfect information:
- Know the rules
- Know what actions do
- Know when you win
... it's just search!
Slight reinterpretation:
- Each node stores a value: the best outcome it can reach
- This is the maximal outcome of its children (the max value)
- Note that we don't have path sums as before (utilities are at the end)
[Figure: search tree with "win" and "lose" terminal leaves]

Deterministic Two-Player
Deterministic, zero-sum games: tic-tac-toe, chess, checkers
- One player maximizes the result
- The other minimizes the result
Minimax search:
- A state-space search tree
- Players alternate turns
- Each node has a minimax value: the best achievable utility against a rational adversary
- Minimax values are computed recursively; terminal values are part of the game
[Figure: two-ply tree; the MIN nodes take values 2 and 5 from terminal values 8, 2, 5, 6, and the MAX root takes value 5]

Computing Minimax Values
Two recursive functions:
- max-value maxes the values of successors
- min-value mins the values of successors

  def value(state):
      if the state is a terminal state: return the state's utility
      if the agent to play is MAX: return max-value(state)
      if the agent to play is MIN: return min-value(state)

  def max-value(state):
      initialize max = -∞
      for (a, s) in successors(state):
          v ← value(s)
          max ← maximum(max, v)
      return max

  def policy(state):
      ss = successors(state)
      return argmax(ss, key=value)

(min-value is symmetric, with +∞ and minimum.)
Pick an order for the successors for two reasons: a sequential processor and pruning.
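The pseudocode maps directly onto a short runnable version. A minimal sketch, assuming a nested-list tree encoding (a leaf is a utility, an internal node is a list of children) that is mine, not the lecture's; the leaf values match the Minimax Example slide below:

```python
# Minimax over a tree encoded as nested lists; levels alternate MAX/MIN.
def minimax(node, maximizing=True):
    if not isinstance(node, list):          # terminal state: return its utility
        return node
    values = [minimax(c, not maximizing) for c in node]
    return max(values) if maximizing else min(values)

# The example tree from the "Minimax Example" slide:
# three MIN nodes with leaves (3, 12, 8), (2, 3, 9), (14, 1, 8).
tree = [[3, 12, 8], [2, 3, 9], [14, 1, 8]]
print(minimax(tree))  # MIN values 3, 2, 1; the MAX root picks 3
```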

Tic-tac-toe Game Tree
[Figure: the first few plies of the tic-tac-toe game tree, down to terminal utilities]

Minimax Example
[Figure: minimax tree. Leaf values 3, 12, 8 | 2, 3, 9 | 14, 1, 8; the MIN nodes take values 3, 2, 1, and the MAX root takes value 3.]

Minimax Properties
- Optimal against a perfect player. Against a non-perfect player?
- Time complexity? O(b^m) (depth m, branching factor b)
- Space complexity? O(bm) with depth-first exploration
- For chess, b ≈ 35 and m ≈ 100: an exact solution is completely infeasible
- But do we need to explore the whole tree?
[Figure: max/min tree fragment with leaves 10, 11, 9, ...]

Overcoming Computational Limits
Cannot search to the leaves in most games. Depth-limited search:
- Instead, search to a limited depth of the tree
- Replace terminal utilities with a heuristic evaluation function
- The guarantee of optimal play is gone
- More plies make a BIG difference (as does a good evaluation function)
Example: a chess program
- Suppose we have 100 seconds and can explore 10K nodes/sec
- So we can check 1M nodes per move
- Minimax won't even finish depth 4: novice level
- If we could reach depth 8: a decent player
- How could we achieve that?
[Figure: a tree searched with limit = 2; estimated MIN values -2, 4, -1 back up to a root value of 4; the nodes below the limit (?) are never explored]

Depth-Limited Search
Still two recursive functions: max-value and min-value.

  def value(state, limit):
      if the state is a terminal state: return U(state)
      if limit = 0: return evaluation_function(state)
      if the agent to play is MAX: return max-value(state, limit)
      if the agent to play is MIN: return min-value(state, limit)

  def max-value(state, limit):
      initialize max = -∞
      for (a, s) in successors(state):
          v ← value(s, limit - 1)
          max ← maximum(max, v)
      return max

Pick an order for the successors for two reasons: a sequential processor and pruning.
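A runnable sketch of this scheme, reusing the nested-list encoding from above; the flat_mean heuristic is an arbitrary stand-in of mine for a real evaluation function:

```python
# Depth-limited minimax: back up true utilities at terminals, but fall back
# to a heuristic evaluation when the depth budget runs out.
def dl_minimax(node, limit, maximizing=True, evaluate=lambda n: 0.0):
    if not isinstance(node, list):       # terminal state: true utility
        return node
    if limit == 0:                       # cutoff: heuristic estimate
        return evaluate(node)
    values = [dl_minimax(c, limit - 1, not maximizing, evaluate) for c in node]
    return max(values) if maximizing else min(values)

# Toy evaluation: score an unexpanded subtree by the mean of its leaves.
def flat_mean(node):
    leaves, stack = [], [node]
    while stack:
        n = stack.pop()
        if isinstance(n, list):
            stack.extend(n)
        else:
            leaves.append(n)
    return sum(leaves) / len(leaves)

tree = [[3, 12, 8], [2, 3, 9], [14, 1, 8]]
print(dl_minimax(tree, limit=1, evaluate=flat_mean))  # cuts off at the MIN nodes
```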

Evaluation Functions
- A function which scores non-terminal states
- Ideal function: returns the true utility of the position
- In practice: typically a weighted linear sum of features:
  Eval(s) = w1 f1(s) + w2 f2(s) + ... + wn fn(s)
  e.g. f1(s) = (number of white queens - number of black queens), etc.
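As an illustration (the features, weights, and board encoding are assumptions of mine, not the lecture's), a material-counting evaluation might look like:

```python
# Hypothetical material-count features for a chess-like position.
# `board` is assumed to map squares to piece codes like "wQ", "bP".
def count(board, piece):
    return sum(1 for p in board.values() if p == piece)

FEATURES = [
    (9.0, lambda b: count(b, "wQ") - count(b, "bQ")),  # queen difference
    (5.0, lambda b: count(b, "wR") - count(b, "bR")),  # rook difference
    (1.0, lambda b: count(b, "wP") - count(b, "bP")),  # pawn difference
]

def evaluate(board):
    # Eval(s) = w1*f1(s) + w2*f2(s) + ... : weighted linear sum of features
    return sum(w * f(board) for w, f in FEATURES)

board = {"d1": "wQ", "a2": "wP", "h7": "bP", "a8": "bR"}
print(evaluate(board))  # 9*(1-0) + 5*(0-1) + 1*(1-1) = 4.0
```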

Pruning in Minimax
[Figure: the minimax example tree with pruning. The first MIN node evaluates to 3 from leaves 3, 12, 8; the second is ≤ 2 as soon as its first leaf (2) is seen, so its remaining leaves are pruned; the third is ≤ 1 after leaves 14 and 1, so its last leaf (8) is pruned. The root value is still 3.]

-: Pruning in Depth-Limited Search General configuration  is the best value that MAX can get at any choice point along the current path If n becomes worse than , MAX will avoid it, so can stop considering n’s other children Define  similarly for MIN Player Opponent  Player Opponent n

Another - Pruning Example 3 2 14 3 ≤2 ≤1 3 12 5 1 ≥8 8

α-β Pruning Algorithm
[Figure: full α-β pseudocode, tracking the bounds α and β and the running value v]
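The algorithm itself survives only as a figure here, so the following is a minimal runnable sketch of α-β pruning on the nested-list trees used above (my encoding, not the slide's):

```python
import math

# Alpha-beta pruning: alpha = best value MAX can already guarantee on the
# current path, beta = best value MIN can guarantee. Prune when alpha >= beta.
def alphabeta(node, alpha=-math.inf, beta=math.inf, maximizing=True):
    if not isinstance(node, list):      # terminal: return its utility
        return node
    if maximizing:
        v = -math.inf
        for child in node:
            v = max(v, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, v)
            if alpha >= beta:           # MIN above will never allow this node
                break
        return v
    else:
        v = math.inf
        for child in node:
            v = min(v, alphabeta(child, alpha, beta, True))
            beta = min(beta, v)
            if alpha >= beta:           # MAX above will never allow this node
                break
        return v

tree = [[3, 12, 8], [2, 3, 9], [14, 1, 8]]
print(alphabeta(tree))  # -> 3; the leaves 3, 9 and the final 8 are never examined
```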

α-β Pruning Properties
- Pruning has no effect on the final action computed
- Good move ordering improves the effectiveness of pruning: put the best moves first (left to right)
- With "perfect ordering", time complexity drops from O(b^m) to O(b^(m/2)): this doubles the solvable depth
- Chess: takes a program from bad to good, but still far from perfect
- A simple example of metareasoning: reasoning about which computations are relevant
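To see the ordering effect concretely, here is the same α-β search instrumented with a leaf counter; the instrumentation and the two orderings are mine:

```python
import math

visited = 0  # global leaf counter (instrumentation only)

def alphabeta_counting(node, alpha=-math.inf, beta=math.inf, maximizing=True):
    global visited
    if not isinstance(node, list):
        visited += 1                     # count every leaf we evaluate
        return node
    v = -math.inf if maximizing else math.inf
    for child in node:
        cv = alphabeta_counting(child, alpha, beta, not maximizing)
        if maximizing:
            v = max(v, cv)
            alpha = max(alpha, v)
        else:
            v = min(v, cv)
            beta = min(beta, v)
        if alpha >= beta:                # cutoff
            break
    return v

good = [[3, 12, 8], [2, 3, 9], [14, 1, 8]]   # strongest MIN subtree first
bad  = [[2, 3, 9], [14, 1, 8], [3, 12, 8]]   # same tree, strongest subtree last

for t in (good, bad):
    visited = 0
    alphabeta_counting(t)
    print(visited)   # 6 leaves with good ordering, 8 with bad ordering
```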

Stochasticity

Expectimax Search Trees
What if we don't know what the result of an action will be? E.g.:
- In solitaire, the next card is unknown
- In backgammon, the dice roll is unknown
- In Tetris, the next piece is unknown
- In Minesweeper, the mine locations are unknown
- In Pacman, the ghosts move randomly
For such games (e.g. solitaire), do expectimax search:
- Max nodes as in minimax search
- Chance nodes are like min nodes, except the outcome is uncertain
- Chance nodes take the average (expectation) of the values of their children
This is a Markov decision process couched in the language of trees.
[Figure: a MAX root over chance nodes, with leaf values 10, 4, 5, 7]

Reminder: Expectations
We can define a function f(X) of a random variable X. The expected value E[f(X)] is the average value, weighted by the probability of each outcome X = x_i.
Example: how long to get to the airport?
- Driving time as a function of traffic, L(T): L(none) = 20 min, L(light) = 30 min, L(heavy) = 60 min
- Given P(T) = {none: 0.25, light: 0.5, heavy: 0.25}
- What is my expected driving time E[L(T)]?
  E[L(T)] = Σ_i L(t_i) P(t_i)
          = L(none) P(none) + L(light) P(light) + L(heavy) P(heavy)
          = (20 × 0.25) + (30 × 0.5) + (60 × 0.25) = 35 min
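The same computation in a few lines of Python (the variable names are mine):

```python
# Expected driving time E[L(T)] = sum over traffic levels of L(t) * P(t).
L = {"none": 20, "light": 30, "heavy": 60}       # minutes
P = {"none": 0.25, "light": 0.5, "heavy": 0.25}  # probabilities sum to 1
expected = sum(L[t] * P[t] for t in L)
print(expected)  # -> 35.0 minutes
```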

Expectimax Search
In expectimax search, we have a probabilistic model of how the opponent (or environment) will behave in any state:
- The model could be a simple uniform distribution (roll a die)
- The model could be sophisticated and require a great deal of computation
- We have a node for every outcome that is out of our control: opponent or environment
- The model might say that adversarial actions are likely!
For now, assume that for any state we magically have a distribution assigning probabilities to opponent actions / environment outcomes.
Having a probabilistic belief about an agent's action does not mean that the agent is flipping any coins!

Expectimax Algorithm

  def value(s):
      if s is a max node: return maxValue(s)
      if s is an exp node: return expValue(s)
      if s is a terminal node: return evaluation(s)

  def maxValue(s):
      values = [value(s') for (a, s') in successors(s)]
      return max(values)

  def expValue(s):
      values  = [value(s') for (a, s') in successors(s)]
      weights = [probability(s, a, s') for (a, s') in successors(s)]
      return expectation(values, weights)

[Figure: a chance node averaging leaf values 8, 4, 5, 6]
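A runnable counterpart on nested-list trees, assuming uniform chance nodes (a real model would supply the weights); the example tree's leaves are inferred from the averages on the next slide:

```python
# Expectimax on nested-list trees: levels alternate MAX and chance.
# Chance nodes are assumed uniform here for simplicity.
def expectimax(node, maximizing=True):
    if not isinstance(node, list):       # terminal: return its utility
        return node
    values = [expectimax(c, not maximizing) for c in node]
    if maximizing:
        return max(values)
    return sum(values) / len(values)     # uniform expectation

# Tree from the "Expectimax Example" slide: leaves 3,12,8 | 2,4,6 | 14,5,2.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(expectimax(tree))  # chance values 23/3, 4, 21/3; root -> 23/3 ≈ 7.67
```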

Expectimax Example
[Figure: expectimax tree; the chance nodes take values 23/3, 4, and 21/3, and the MAX root takes 23/3 (consistent with leaf values 3, 12, 8 | 2, 4, 6 | 14, 5, 2).]

Expectimax Pruning?
[Figure: the same expectimax tree, asking whether any branches can be skipped.]
With unbounded leaf values, no: any unseen child could raise or lower a chance node's average arbitrarily, so α-β style pruning does not apply without bounds on the values.

Expectimax Evaluation
Evaluation functions quickly return an estimate for a node's true value (which value: expectimax or minimax?).
- For minimax, the scale of the evaluation function doesn't matter: we just want better states to have higher evaluations (get the ordering right)
- For expectimax, we need the magnitudes to be meaningful
[Figure: leaf evaluations 40, 20, 30 become 1600, 400, 900 after squaring (x²); the ordering is preserved, but averages, and hence expectimax decisions, can change]
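A small numeric sketch of the point (the particular values are mine, chosen so the decision flips under a monotone transform):

```python
# A MAX choice between a sure outcome and a 50/50 gamble.
sure   = 31           # option A: certain value 31
gamble = [40, 20]     # option B: two equally likely outcomes

def exp_value(outcomes):
    return sum(outcomes) / len(outcomes)

# Raw scale: A (31) beats B (30).
print(sure, exp_value(gamble))                     # 31 30.0

# Squaring preserves the ordering of individual values (minimax unaffected)
# but changes expectations: now B (1000) beats A (961).
print(sure**2, exp_value([v**2 for v in gamble]))  # 961 1000.0
```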

Expectiminimax
E.g. backgammon: the environment is an extra "player" that moves after each agent. Expectiminimax combines minimax and expectimax:

  ExpectiMinimax-Value(state):
      if state is terminal or we reached our depth limit: return its utility / evaluation
      if state is a MAX node: return the maximum of ExpectiMinimax-Value over successors
      if state is a MIN node: return the minimum of ExpectiMinimax-Value over successors
      if state is a chance node: return the probability-weighted average of ExpectiMinimax-Value over successors
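A compact runnable sketch under assumptions of mine: tagged tuples for the three node types, with explicit outcome probabilities at chance nodes:

```python
# Expectiminimax over tagged trees: ("max", children), ("min", children),
# ("chance", [(prob, child), ...]), or a bare number for a terminal.
def expectiminimax(node):
    if not isinstance(node, tuple):      # terminal: return its utility
        return node
    kind, children = node
    if kind == "max":
        return max(expectiminimax(c) for c in children)
    if kind == "min":
        return min(expectiminimax(c) for c in children)
    # chance node: probability-weighted average over outcomes
    return sum(p * expectiminimax(c) for p, c in children)

# MAX chooses between a safe 3 and a coin flip over a MIN node and a 10.
tree = ("max", [3, ("chance", [(0.5, ("min", [4, 2])), (0.5, 10)])])
print(expectiminimax(tree))  # chance = 0.5*2 + 0.5*10 = 6 -> root value 6
```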

Stochastic Two-Player
- Dice rolls increase b: there are 21 distinct rolls with 2 dice
- Backgammon has ≈ 20 legal moves per roll
- Depth 4 = 20 × (21 × 20)^3 ≈ 1.2 × 10^9 nodes
- As depth increases, the probability of reaching a given search node shrinks
- So the usefulness of deep search is diminished, and limiting depth is less damaging
- But pruning is trickier...
- TD-Gammon used depth-2 search + a very good evaluation function + reinforcement learning to reach world-champion-level play: the first AI world champion in any game!