EIE426-AICV 1 Game Playing Filename: eie426-game-playing-0809.ppt
EIE426-AICV 2 Contents:
- Games vs. search problems
- Minimax algorithm
- Resource limits
- α-β pruning
- Games of chance
- Practical issues
- Example: Tic Tac Toe
EIE426-AICV 3 Games vs. search problems
- "Unpredictable" opponent => the solution is a strategy specifying a move for every possible opponent reply
- Time limits => unlikely to find the goal, must approximate
Plan of attack:
- Computer considers possible lines of play (Babbage, 1846)
- Algorithm for perfect play (Zermelo, 1912; Von Neumann, 1944)
- Finite horizon, approximate evaluation (Zuse, 1945; Wiener, 1948; Shannon, 1950)
- First chess program (Turing, 1951)
- Machine learning to improve evaluation accuracy (Samuel, 1952-57)
- Pruning to allow deeper search (McCarthy, 1956)
EIE426-AICV 4 Types of games
EIE426-AICV 5 Game tree (2-player, deterministic, turns)
EIE426-AICV 6 Game Trees This section shows how programs can play games such as checkers and chess. The ply of a game tree is p = d, where d is the depth of the tree; the tree shown has p = d = 2.
EIE426-AICV 7 Game Trees (cont.) A game tree is a representation
- that is a semantic tree,
- in which nodes denote board configurations and branches denote moves,
- with writers that establish whether a node is for the maximizer or the minimizer, and connect a board configuration with a board-configuration description,
- with readers that determine whether a node is for the maximizer or the minimizer, and produce a board configuration's description.
Exhaustive search is impossible!
EIE426-AICV 8 Minimax Perfect play for deterministic, perfect-information games. Idea: choose the move to the position with the highest minimax value = the best achievable payoff against best play.
EIE426-AICV 9 Minimax algorithm
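The minimax algorithm above can be sketched in Python. The game interface here (`is_leaf`, `value`, `children`) is an illustrative assumption, not part of the slides:

```python
def minimax_value(state, is_max, is_leaf, value, children):
    """Return the minimax value of `state`: Max picks the highest,
    Min the lowest, backed up from the leaf utilities."""
    if is_leaf(state):
        return value(state)
    vals = [minimax_value(s, not is_max, is_leaf, value, children)
            for s in children(state)]
    return max(vals) if is_max else min(vals)

# Tiny two-ply tree: Max to move, three Min nodes below, leaf utilities.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
best = minimax_value(tree, True,
                     is_leaf=lambda s: isinstance(s, int),
                     value=lambda s: s,
                     children=lambda s: s)
# best == 3: the Min nodes back up 3, 2, 2, and Max takes the largest.
```

Representing the tree as nested lists keeps the sketch self-contained; a real game would generate `children` from the move rules instead.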
EIE426-AICV 10 Properties of minimax
Complete? Yes, if the tree is finite (chess has specific rules for this).
Optimal? Yes, against an optimal opponent. Otherwise?
Time complexity? O(b^m)
Space complexity? O(bm) (depth-first exploration)
For chess, b ≈ 35, m ≈ 100 for "reasonable" games => an exact solution is completely infeasible.
EIE426-AICV 11 Resource limits Suppose we have 100 seconds and explore 10^4 nodes/second => 10^6 nodes per move. Standard approach:
- cutoff test, e.g., a depth limit
- evaluation function = estimated desirability of a position
EIE426-AICV 12 Evaluation functions For chess, typically a linear weighted sum of features: Eval(s) = w1·f1(s) + w2·f2(s) + … + wn·fn(s), e.g., w1 = 9 with f1(s) = (number of white queens) − (number of black queens), etc.
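A minimal sketch of such a weighted sum; the feature values below (queen and rook counts) are illustrative, not from the slides:

```python
def eval_position(weights, features):
    """Eval(s) = w1*f1(s) + w2*f2(s) + ... + wn*fn(s)."""
    return sum(w * f for w, f in zip(weights, features))

# Illustrative material count: white is up one queen (weight 9)
# but down one rook (weight 5).
score = eval_position(weights=[9, 5], features=[1 - 0, 1 - 2])
# score == 9*1 + 5*(-1) == 4
```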
EIE426-AICV 13 Exact values don't matter for deterministic games Behaviour is preserved under any monotonic transformation of Eval Only the order matters: payoff in deterministic games acts as an ordinal utility function.
EIE426-AICV 14 Cutting off search MinimaxCutoff is identical to MinimaxValue except:
1. Terminal? is replaced by Cutoff?
2. Utility is replaced by Eval
Does it work in practice? b^m = 10^6, b = 35 => m = 4. 4-ply lookahead is a hopeless chess player!
- 4-ply ≈ human novice
- 8-ply ≈ typical PC, human master
- 12-ply ≈ Deep Blue, Kasparov
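The two substitutions above amount to a small change to the minimax sketch; again the game interface names are illustrative assumptions:

```python
def minimax_cutoff(state, depth, is_max, cutoff, evaluate, is_terminal, children):
    """MinimaxValue with Terminal? replaced by Cutoff? (a depth limit)
    and Utility replaced by Eval (a heuristic estimate)."""
    if is_terminal(state) or depth >= cutoff:   # Cutoff? test
        return evaluate(state)                  # Eval instead of Utility
    vals = [minimax_cutoff(s, depth + 1, not is_max, cutoff,
                           evaluate, is_terminal, children)
            for s in children(state)]
    return max(vals) if is_max else min(vals)
```

On a nested-list tree, `evaluate` plays both roles: exact utility at true leaves, heuristic estimate at cutoff nodes.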
EIE426-AICV 15 α-β pruning example The principle: if you have an idea that is surely bad, do not take time to see how truly awful it is.
EIE426-AICV 16 Another α-β pruning example (figure: Max and Min levels, with numbers indicating the step order of the process)
EIE426-AICV 17 Properties of α-β
Pruning does not affect the final result.
Good move ordering improves the effectiveness of pruning.
With "perfect ordering," time complexity = O(b^(m/2)) => doubles the depth of search => can easily reach depth 8 and play good chess.
A simple example of the value of reasoning about which computations are relevant (a form of metareasoning).
EIE426-AICV 18 Why is it called α-β? α is the best value (to Max) found so far off the current path. If V is worse than α, Max will avoid it => prune that branch. β is defined similarly for Min.
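A sketch of α-β pruning on the same nested-list tree used for minimax (the interface names are illustrative assumptions). Counting leaf evaluations shows that two leaves are never examined:

```python
def alphabeta(state, is_max, alpha, beta, is_leaf, value, children):
    """Minimax value of `state`, skipping branches that cannot
    influence the result."""
    if is_leaf(state):
        return value(state)
    if is_max:
        v = float("-inf")
        for s in children(state):
            v = max(v, alphabeta(s, False, alpha, beta, is_leaf, value, children))
            alpha = max(alpha, v)
            if v >= beta:      # Min above will never allow this: prune
                break
        return v
    else:
        v = float("inf")
        for s in children(state):
            v = min(v, alphabeta(s, True, alpha, beta, is_leaf, value, children))
            beta = min(beta, v)
            if v <= alpha:     # Max above will never allow this: prune
                break
        return v

seen = []
def leaf_value(s):
    seen.append(s)   # record which leaves were actually evaluated
    return s

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
best = alphabeta(tree, True, float("-inf"), float("inf"),
                 lambda s: isinstance(s, int), leaf_value, lambda s: s)
# best == 3; the leaves 4 and 6 under the second Min node are pruned,
# so only 7 of the 9 leaves are evaluated.
```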
EIE426-AICV 19 α-β May Not Prune Many Branches from the Tree For an optimally arranged tree, the number of static evaluations S is given by S = 2b^(d/2) − 1 for d even; S = b^((d+1)/2) + b^((d−1)/2) − 1 for d odd, where b is the number of branches and d is the depth of the tree.
EIE426-AICV 20 An Optimally Arranged Game Tree (minimizing and maximizing levels) With d = 3 and b = 3, S = 3^2 + 3^1 − 1 = 11.
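The slide's count can be checked numerically with the standard optimal-ordering formula (S = 2b^(d/2) − 1 for even d, S = b^((d+1)/2) + b^((d−1)/2) − 1 for odd d); compare it with the b^d leaves a full tree would hold:

```python
def optimal_static_evals(b, d):
    """Static evaluations needed by alpha-beta on an optimally
    arranged tree with branching b and depth d."""
    if d % 2 == 0:
        return 2 * b ** (d // 2) - 1
    return b ** ((d + 1) // 2) + b ** ((d - 1) // 2) - 1

# The slide's case: b = 3, d = 3 gives 3**2 + 3**1 - 1 == 11,
# versus 3**3 == 27 leaves in the full tree.
```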
EIE426-AICV 21 Deterministic games in practice Checkers: Chinook (a program) ended the 40-year reign of human world champion Marion Tinsley in 1994. It used an endgame database defining perfect play for all positions involving 8 or fewer pieces on the board, a total of 443,748,401,247 positions. Checkers is a board game played between two players, who alternate moves. A player who cannot move, because he has no pieces or because all of his pieces are blocked, loses the game. Players can resign or agree to draws.
EIE426-AICV 22 Deterministic games in practice (cont.) Chess: Deep Blue defeated human world champion Garry Kasparov in a six-game match in 1997. Deep Blue searches 200 million positions per second, uses a very sophisticated evaluation function, and undisclosed methods for extending some lines of search up to 40 ply.
EIE426-AICV 23 Deterministic games in practice (cont.) Othello: human champions refuse to compete against computers, which are too good. The object of the game is to have the majority of your colour discs on the board at the end of the game.
EIE426-AICV 24 Deterministic games in practice (cont.) Go: human champions refuse to compete against computers, which are too bad. In Go, b > 300, so most programs use pattern knowledge bases to suggest plausible moves.
EIE426-AICV 25 Nondeterministic games: backgammon The object of the game is to move all your checkers into your own home board and then bear them off. The first player to bear off all of their checkers wins the game.
EIE426-AICV 26 Nondeterministic games in general In nondeterministic games, chance is introduced by dice or card shuffling. Simplified example with coin-flipping:
EIE426-AICV 27 Algorithm for nondeterministic games Expectiminimax gives perfect play. It is just like Minimax, except we must also handle chance nodes:
- if state is a Max node, return the highest ExpectiMinimax-Value of Successors(state)
- if state is a Min node, return the lowest ExpectiMinimax-Value of Successors(state)
- if state is a chance node, return the average (weighted by probability) of the ExpectiMinimax-Values of Successors(state)
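The three cases above can be sketched over a hand-built tree; the node encoding `("max" | "min" | "chance", children)` with probabilities on chance branches is an illustrative assumption:

```python
def expectiminimax(node):
    """Expectiminimax value: max/min at player nodes,
    probability-weighted average at chance nodes."""
    if isinstance(node, (int, float)):              # leaf: utility value
        return node
    kind, children = node
    if kind == "max":
        return max(expectiminimax(c) for c in children)
    if kind == "min":
        return min(expectiminimax(c) for c in children)
    # chance node: children are (probability, subtree) pairs
    return sum(p * expectiminimax(c) for p, c in children)

# Coin-flip example: each of Max's moves leads to a fair coin toss.
tree = ("max", [
    ("chance", [(0.5, ("min", [2, 4])),    # heads -> Min picks 2
                (0.5, ("min", [0, 6]))]),  # tails -> Min picks 0
    ("chance", [(0.5, 1), (0.5, 3)]),      # expected value 2.0
])
# expectiminimax(tree) == max(0.5*2 + 0.5*0, 0.5*1 + 0.5*3) == 2.0
```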
EIE426-AICV 28 Nondeterministic games in practice
Dice rolls increase b: 21 possible rolls with 2 dice.
Backgammon has about 20 legal moves per position (can be 6,000 with a 1-1 roll).
Depth 4: 20 × (21 × 20)^3 ≈ 1.2 × 10^9 nodes.
As depth increases, the probability of reaching a given node shrinks => the value of lookahead is diminished, and α-β pruning is much less effective.
TD-Gammon (a backgammon-playing program) uses depth-2 search + a very good Eval ≈ world-champion level.
EIE426-AICV 29 Exact values DO matter for nondeterministic games Behaviour is preserved only by positive linear transformation of Eval. Hence Eval should be proportional to the expected payoff.
EIE426-AICV 30 Iterative Deepening Keeps Computing Within Time Bounds The number of static evaluations up to the (d−1)-th ply: b + b^2 + … + b^(d−1) = (b^d − b)/(b − 1). The number of static evaluations at the d-th ply: b^d.
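The point of these two counts is that the deepest ply dominates: re-searching every shallower ply costs little compared with ply d alone. A quick numerical check (the values b = 35, d = 6 below are illustrative):

```python
def evals_at_ply(b, d):
    """Static evaluations at ply d of a uniform tree."""
    return b ** d

def evals_up_to(b, d):
    """Static evaluations summed over plies 1..d."""
    return sum(b ** i for i in range(1, d + 1))

b, d = 35, 6
shallow = evals_up_to(b, d - 1)   # cost of re-searching plies 1..d-1
deepest = evals_at_ply(b, d)      # cost of ply d alone
# deepest is roughly (b - 1) times shallow, so iterative deepening's
# repeated shallow searches add only a small overhead.
```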
EIE426-AICV 31 Heuristic Pruning Limits Search b(child) = b(parent) − r(child), where r(child) is the rank of a child, 1, 2, … (rank 1 = the best child; the largest rank = the worst child). Better-ranked children thus get more branches, and the number of static evaluations is reduced.
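One way to see the effect of this rule is to count the leaves it leaves reachable versus a uniform tree; this counting sketch is my own illustration of the slide's formula, not code from the course:

```python
def heuristic_leaves(branching, depth):
    """Leaves explored when a child ranked r (1-based) receives
    b(child) = b(parent) - r branches of its own."""
    if depth == 0 or branching <= 0:
        return 1
    return sum(heuristic_leaves(branching - r, depth - 1)
               for r in range(1, branching + 1))

# With b = 4 and depth 2: a uniform tree has 4**2 == 16 leaves,
# but the rank-based rule explores only 7.
```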
EIE426-AICV 32 Example: Tic Tac Toe
EIE426-AICV 33 Example: The Minimax Procedure Applied to Playing Tic-Tac-Toe
EIE426-AICV 34 Two-ply minimax applied to the opening move of tic-tac-toe (Min's move)
EIE426-AICV 35 Two-ply minimax applied to X's second move of tic-tac-toe
EIE426-AICV 36 Two-ply minimax applied to X's move near the end of the game
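Tic-tac-toe is small enough that the minimax idea from these slides can be run to full depth rather than cut off at two ply; the sketch below (board as a 9-character string, X = maximizer) is an illustrative implementation, not the course's code:

```python
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if a line is completed, else None."""
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def ttt_minimax(board, to_move):
    """Full-depth minimax value: +1 X win, -1 O win, 0 draw."""
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    if " " not in board:
        return 0                                   # board full: draw
    values = [ttt_minimax(board[:i] + to_move + board[i + 1:],
                          "O" if to_move == "X" else "X")
              for i, cell in enumerate(board) if cell == " "]
    return max(values) if to_move == "X" else min(values)

# From the empty board, perfect play by both sides ends in a draw.
assert ttt_minimax(" " * 9, "X") == 0
```

Searching the whole game tree from the empty board takes well under a million node expansions, so no cutoff or evaluation function is needed here.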
EIE426-AICV 37 Summary Games are fun to work on! (and dangerous) They illustrate several important points about AI perfection is unattainable => must approximate good idea to think about what to think about uncertainty constrains the assignment of values to states Games are to AI as grand prix racing is to automobile design