
1 Adversarial Search

2 Game playing
Perfect play
The minimax algorithm
Alpha-beta pruning
Resource limitations
Elements of chance
Imperfect information

3 Game Playing State-of-the-Art
•Checkers: Chinook ended the 40-year reign of human world champion Marion Tinsley in 1994. It used an endgame database defining perfect play for all positions involving 8 or fewer pieces on the board, a total of 443,748,401,247 positions. Checkers is now solved!
•Chess: Deep Blue defeated human world champion Garry Kasparov in a six-game match in 1997. Deep Blue examined 200 million positions per second and used very sophisticated evaluation and undisclosed methods for extending some lines of search up to 40 ply. Current programs are even better, if less historic.
•Othello: Human champions refuse to compete against computers, which are too good.
•Go: Human champions are beginning to be challenged by machines, though the best humans still beat the best machines. In Go, b > 300, so most programs use pattern knowledge bases to suggest plausible moves, along with aggressive pruning.
•Pacman: unknown

4 What kind of games?
•Abstraction: to describe a game we must capture every relevant aspect of the game (e.g., chess, tic-tac-toe, …).
•Accessible environments: such games are characterized by perfect information.
•Search: game playing then consists of a search through possible game positions.
•Unpredictable opponent: introduces uncertainty, so game playing must deal with contingency problems.

5 Types of games

6

7 Game Playing
•Many different kinds of games!
•Axes:
◦Deterministic or stochastic?
◦One, two, or more players?
◦Perfect information (can you see the state)?
◦Turn taking or simultaneous action?
•We want algorithms for calculating a strategy (policy) which recommends a move in each state.

8 Deterministic Games
•Deterministic, single player, perfect information: know the rules, know what actions do, know when you win. E.g. Freecell, 8-Puzzle, Rubik's cube … it's just search!
•Slight reinterpretation (see the sketch below):
◦Each node stores a value: the best outcome it can reach.
◦This is the maximal outcome of its children (the max value).
◦Note that we don't have path sums as before (utilities at the end).
•After the search, we can pick the move that leads to the best node.
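A minimal sketch of this reinterpretation (the explicit-tree encoding is made up for illustration, not from the slides): a node is either a terminal utility or a list of children, and its value is the max of its children's values.

    # Sketch: a node is a terminal utility (a number) or a list of children.
    # Single player, so every level takes the max (there is no opponent).
    def max_value(node):
        if isinstance(node, (int, float)):    # utility at the end
            return node
        return max(max_value(child) for child in node)

    # The best reachable outcome of this toy tree is 7, via the middle move.
    tree = [[3, 5], [7, 1], 2]
    assert max_value(tree) == 7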

9 Deterministic Two-Player
•E.g. tic-tac-toe, chess, checkers.
•Zero-sum games:
◦One player maximizes the result.
◦The other minimizes the result.
•Minimax search:
◦A state-space search tree.
◦Players alternate.
◦Each layer, or ply, consists of a round of moves.
◦Choose the move to the position with the highest minimax value = best achievable utility against best play.

10 Games vs. search problems
•"Unpredictable" opponent → the solution is a strategy specifying a move for every possible opponent reply.
•Time limits → unlikely to find the goal, must approximate.
•Plan of attack:
◦Computer considers possible lines of play (Babbage, 1846)
◦Algorithm for perfect play (Zermelo, 1912; von Neumann, 1944)
◦Finite horizon, approximate evaluation (Zuse, 1945; Wiener, 1948; Shannon, 1950)
◦First chess program (Turing, 1951)
◦Machine learning to improve evaluation accuracy (Samuel, 1952-57)
◦Pruning to allow deeper search (McCarthy, 1956)

11 Searching for the next move
•Complexity: many games have a huge search space. Chess: b = 35, m = 100, so the game tree has about 35^100 nodes; if each node takes about 1 ns to explore, then each move will take about 10^135 millennia to calculate.
•Resource (e.g., time, memory) limits mean an optimal solution is not feasible, so we must approximate:
◦1. Pruning: makes the search more efficient by discarding portions of the search tree that cannot improve the quality of the result.
◦2. Evaluation functions: heuristics to evaluate the utility of a state without an exhaustive search.
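Redoing that arithmetic (a quick sketch using only the slide's own assumptions: b = 35, m = 100, about 1 ns per node):

    # 35^100 nodes at ~1 ns each, converted to millennia.
    nodes = 35 ** 100                        # about 2.5e154 nodes
    seconds = nodes * 1e-9                   # 1 ns per node
    millennia = seconds / (3600 * 24 * 365 * 1000)
    print(f"{millennia:.1e} millennia")      # about 8.1e134, i.e. ~10^135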

12 Two-player games
•A game formulated as a search problem:
◦Initial state: the board position and whose turn it is
◦Operators: definition of legal moves
◦Terminal state: conditions for when the game is over
◦Utility function: a numeric value that describes the outcome of the game, e.g., -1, 0, 1 for loss, draw, win (AKA payoff function)

13 Example: Tic-Tac-Toe

14 The minimax algorithm
•Perfect play for deterministic environments with perfect information.
•Basic idea: choose the move with the highest minimax value = best achievable payoff against best play.
•Algorithm:
1. Generate the game tree completely.
2. Determine the utility of each terminal state.
3. Propagate the utility values upward in the tree by applying MIN and MAX operators on the nodes in the current level.
4. At the root node, use the minimax decision to select the move with the max (of the min) utility value.
•Steps 2 and 3 in the algorithm assume that the opponent will play perfectly.

15 Generate Game Tree

16 [Tic-tac-toe board diagrams: possible opening moves for X]

17 [Tic-tac-toe board diagrams: O's replies to an X move]

18 [Tic-tac-toe game-tree diagram, annotated "1 ply" and "1 move"]

19 A sub-tree [tic-tac-toe game-tree diagram with terminal positions labeled win, lose, draw]

20 What is a good move? [the same tic-tac-toe sub-tree, terminal positions labeled win, lose, draw]

21 MiniMax Example [tree diagram with alternating MAX and MIN levels]

22 MiniMax: Recursive Implementation
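The slide's code is an image that did not survive the transcript; here is a minimal sketch of what a recursive implementation of steps 2-4 typically looks like (the explicit-tree encoding is made up for illustration, not the slide's exact code):

    # Sketch of recursive minimax over an explicit tree: a node is a
    # terminal utility (number) or a list of children; MAX and MIN
    # alternate by level.
    def minimax_value(node, maximizing=True):
        if isinstance(node, (int, float)):            # step 2: terminal utility
            return node
        values = [minimax_value(c, not maximizing) for c in node]
        return max(values) if maximizing else min(values)   # step 3: propagate

    # Step 4: at the (MAX) root, pick the move whose MIN subtree is best.
    tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
    best = max(range(len(tree)), key=lambda i: minimax_value(tree[i], False))
    assert minimax_value(tree) == 3 and best == 0     # first move, value 3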

23 Minimax Properties
•Optimal against a perfect player. Otherwise?
•Time complexity? O(b^m)
•Space complexity? O(bm)
•For chess, b = 35, m = 100:
◦An exact solution is completely infeasible.
◦But do we need to explore the whole tree?

24 Resource Limits
•Cannot search to the leaves.
•Depth-limited search: instead, search a limited depth of the tree and replace terminal utilities with an eval function for non-terminal positions (see the sketch below).
•The guarantee of optimal play is gone.
•More plies make a BIG difference.
•Example:
◦Suppose we have 100 seconds and can explore 10K nodes/sec.
◦So we can check 1M nodes per move.
◦α-β reaches about depth 8: a decent chess program.
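A sketch of that depth-limited idea (the game interface and eval_fn here are hypothetical placeholders, not code from the slides):

    # Depth-limited minimax: at the depth cutoff, apply an evaluation
    # function to the non-terminal state instead of recursing further.
    def depth_limited_value(state, game, depth, maximizing, eval_fn):
        if game.is_terminal(state):
            return game.utility(state)
        if depth == 0:
            return eval_fn(state)    # approximation: optimality is gone
        values = [depth_limited_value(s, game, depth - 1,
                                      not maximizing, eval_fn)
                  for s in game.successors(state)]
        return max(values) if maximizing else min(values)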

25 Evaluation Functions
•A function which scores non-terminals.
•Ideal function: returns the utility of the position.
•In practice: typically a weighted linear sum of features:
Eval(s) = w1 f1(s) + w2 f2(s) + … + wn fn(s)
•e.g. f1(s) = (num white queens - num black queens), etc.
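For instance, a minimal sketch of such a weighted linear evaluation (the features, weights, and state encoding are illustrative, not from the slides):

    # Eval(s) = w1*f1(s) + w2*f2(s) + ... + wn*fn(s)
    def eval_fn(state, features, weights):
        return sum(w * f(state) for f, w in zip(features, weights))

    # e.g. f1(s) = num white queens - num black queens, weighted 9;
    # pawn difference weighted 1 (classic material values).
    features = [lambda s: s["white_queens"] - s["black_queens"],
                lambda s: s["white_pawns"] - s["black_pawns"]]
    weights = [9.0, 1.0]
    s = {"white_queens": 1, "black_queens": 0,
         "white_pawns": 6, "black_pawns": 7}
    print(eval_fn(s, features, weights))     # 9*1 + 1*(-1) = 8.0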

26 Evaluation Functions

27 Why Pacman starves
•He knows his score will go up by eating the dot now.
•He knows his score will go up just as much by eating the dot later on.
•There are no point-scoring opportunities after eating the dot.
•Therefore, waiting seems just as good as eating.

28 α-β pruning: general principle
[Diagram: Player to move above node m; Opponent node n deeper in the tree with value v]
•If the value guaranteed at m is better than v, then MAX will choose m, so prune the tree under n.
•Similarly for MIN.

29-38 α-β pruning: example 1
[Step-by-step tree diagrams: a MAX root over three MIN nodes, the root's bound starting at [-∞,+∞]. The first MIN node's leaves 3, 12, 8 give it value 3, raising the root's bound to [3,+∞]. The second MIN node is pruned after its first leaf 2, since its bound [3,2] is empty. The third MIN node's leaves 14, 5, 2 narrow its bound through [3,14] and [3,5] to value 2. The selected move is the first, with minimax value 3.]

39-48 α-β pruning: example 2
[Step-by-step tree diagrams: a MAX root over three MIN nodes. The first MIN node's leaves 2, 5 give it value 2, raising the root's bound to [2,+∞]. The second MIN node's leaves 14, 5, 1 narrow its bound through [2,14] and [2,5] to [2,1], which is empty, so it is pruned. The third MIN node's leaves 8, 12, 3 narrow its bound through [2,8] to value 3. The selected move is the third, with minimax value 3.]

49-57 α-β pruning: example 3
[Step-by-step tree diagrams on a deeper tree: the root already holds bound [6,+∞] from an earlier branch, with [-∞,6] below it. Leaves 14, 5, 9, 1, 4 are explored, narrowing MIN bounds through [-∞,14], [-∞,5], [5,6], [5,1], [5,4] before the move is selected.]

58-63 α-β pruning: example 4
[Step-by-step tree diagrams: as in example 3, the root already holds bound [6,+∞], with [-∞,6] below it. Leaves 14, 7, 9 are explored, narrowing bounds through [-∞,14], [-∞,7], [7,6], at which point the branch is pruned and the move is selected.]

64 α-β pruning: general principle
[Diagram, repeated from slide 28: Player to move above node m; Opponent node n deeper in the tree with value v]
•If the value guaranteed at m is better than v, then MAX will choose m, so prune the tree under n.
•Similarly for MIN.

65 The α-β algorithm
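The algorithm on this slide is also an image; a standard sketch of it, over the same explicit-tree encoding used earlier, might look like this (not the slide's exact code):

    # Sketch of minimax with alpha-beta pruning. alpha = best value MAX
    # can force so far; beta = best value MIN can force so far.
    def alphabeta(node, maximizing=True,
                  alpha=float("-inf"), beta=float("inf")):
        if isinstance(node, (int, float)):
            return node
        if maximizing:
            v = float("-inf")
            for child in node:
                v = max(v, alphabeta(child, False, alpha, beta))
                alpha = max(alpha, v)
                if alpha >= beta:     # MIN above would never allow this
                    break             # prune the remaining children
            return v
        v = float("inf")
        for child in node:
            v = min(v, alphabeta(child, True, alpha, beta))
            beta = min(beta, v)
            if alpha >= beta:         # MAX above would never allow this
                break
        return v

    # Example 1's tree: the second MIN node is pruned after its first
    # leaf (2), and the first move (value 3) is selected.
    print(alphabeta([[3, 12, 8], [2, 4, 6], [14, 5, 2]]))   # 3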

66 Properties of the α-β algorithm

67 Resource limits
•Standard approach:
◦Use CUTOFF-TEST instead of TERMINAL-TEST, e.g., a depth limit (perhaps add quiescence search).
◦Use EVAL instead of UTILITY, i.e., an evaluation function that estimates the desirability of the position.
•Suppose we have 100 seconds and can explore 10^4 nodes/second → 10^6 nodes per move ≈ 35^(8/2).
•α-β reaches depth 8 → a pretty good chess program.

68 Evaluation Functions
•A function which scores non-terminals.
•Ideal function: returns the utility of the position.
•In practice: typically a weighted linear sum of features:
Eval(s) = w1 f1(s) + w2 f2(s) + … + wn fn(s)
•e.g. f1(s) = (num white queens - num black queens), etc.

69 Digression: Exact values don't matter
•Behavior is preserved under any monotonic transformation of EVAL.
•Only the order matters: payoff in deterministic games acts as an ordinal utility function.
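A quick sketch of the point (toy values, made up for illustration): applying a monotonic transformation, here x -> x**3, to the leaves changes the values but not the move minimax picks.

    # Monotonic transformation of leaf utilities: same minimax decision.
    def minimax_value(node, maximizing=True):
        if isinstance(node, (int, float)):
            return node
        vals = [minimax_value(c, not maximizing) for c in node]
        return max(vals) if maximizing else min(vals)

    tree  = [[3, 12], [2, 14]]
    cubed = [[27, 1728], [8, 2744]]   # every leaf cubed
    pick = lambda t: max(range(len(t)),
                         key=lambda i: minimax_value(t[i], False))
    assert pick(tree) == pick(cubed) == 0   # first move either way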

70 Stochastic Games
•Dice rolls increase b: 21 possible rolls with 2 dice.
◦Backgammon has about 20 legal moves per position.
◦Depth 4 = 20 × (21 × 20)^3 ≈ 1.2 × 10^9.
•As depth increases, the probability of reaching a given node shrinks:
◦So the value of lookahead is diminished.
◦So limiting depth is less damaging.
◦But α-β pruning is much less effective.
•TD-Gammon uses depth-2 search + a very good EVAL function + reinforcement learning → world-champion level play.
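Checking the slide's branching arithmetic (the inputs are the slide's own numbers):

    # 21 distinct rolls of two dice, ~20 legal moves per position.
    print(20 * (21 * 20) ** 3)   # 1481760000, i.e. ~1.5e9
                                 # (the slide quotes ~1.2e9)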

71 Nondeterministic games in general
•In nondeterministic games, chance is introduced by dice or card-shuffling.
•Simplified example with coin-flipping: [chance-node tree diagram]

72 Algorithm for nondeterministic games
•Expectiminimax gives perfect play.
•It is just like minimax, except we must also handle chance nodes:
…
if state is a MAX node then return the highest ExpectiMinimax-Value of Successors(state)
if state is a MIN node then return the lowest ExpectiMinimax-Value of Successors(state)
if state is a chance node then return the average of ExpectiMinimax-Value of Successors(state)
…
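A sketch of that recursion over an explicit tree (the tagged-tuple encoding is made up for illustration; chance children carry probabilities, so the "average" is probability-weighted, which reduces to the plain average when outcomes are equally likely):

    # Sketch of expectiminimax: a node is a terminal number or a pair
    # (kind, children) with kind in {"max", "min", "chance"}; chance
    # children are (probability, subtree) pairs.
    def expectiminimax(node):
        if isinstance(node, (int, float)):
            return node
        kind, children = node
        if kind == "max":
            return max(expectiminimax(c) for c in children)
        if kind == "min":
            return min(expectiminimax(c) for c in children)
        return sum(p * expectiminimax(c) for p, c in children)

    # A 50/50 coin flip between a MIN node worth 2 and one worth 4 -> 3.0
    tree = ("chance", [(0.5, ("min", [2, 5])), (0.5, ("min", [4, 7]))])
    print(expectiminimax(tree))   # 3.0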

73 Expectiminimax

74 Digression: Exact values DO matter
•Behavior is preserved only by positive linear transformations of EVAL.
•Hence EVAL should be proportional to the expected payoff.
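A toy illustration of the contrast with the earlier digression (values made up): a monotonic but nonlinear transformation, x -> x**2, can flip the decision once chance is involved.

    # With chance nodes, only positive linear transformations preserve
    # behavior. Squaring the (made-up) leaf values flips the choice.
    def expect(leaves):                    # equally likely outcomes
        return sum(leaves) / len(leaves)

    a, b = [0, 10], [6, 6]
    assert expect(a) < expect(b)           # 5 < 6: prefer b
    a2, b2 = [x * x for x in a], [x * x for x in b]
    assert expect(a2) > expect(b2)         # 50 > 36: now prefer a!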

75 Games of imperfect information
•E.g., card games, where the opponent's initial cards are unknown.
•Typically we can calculate a probability for each possible deal.
•This seems just like having one big dice roll at the beginning of the game.
•Idea: compute the minimax value of each action in each deal, then choose the action with the highest expected value over all deals.
•Special case: if an action is optimal for all deals, it's optimal.
•GIB, the current best bridge program, approximates this idea (as sketched below) by:
1. generating 100 deals consistent with the bidding information
2. picking the action that wins the most tricks on average
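A minimal sketch of that averaging idea (every name here is hypothetical; sample_deal and score stand in for a real deal generator and trick counter, which the slides do not specify):

    import random

    # Sample deals consistent with what is known, score each action in
    # each deal, and pick the action with the best average score.
    def best_action(actions, sample_deal, score, n_deals=100):
        deals = [sample_deal() for _ in range(n_deals)]
        return max(actions,
                   key=lambda a: sum(score(a, d) for d in deals) / n_deals)

    # Toy usage: action "b" scores 0.1 higher on every sampled deal.
    act = best_action(["a", "b"],
                      sample_deal=lambda: random.random(),
                      score=lambda a, d: d if a == "a" else d + 0.1)
    assert act == "b"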

76 Example: four-card bridge/whist/hearts hand, Max to play first

77 Commonsense example

78 Proper analysis
•The intuition that the value of an action is the average of its values in all actual states is WRONG.
•With partial observability, the value of an action depends on the information state or belief state the agent is in.
•We can generate and search a tree of information states.
•This leads to rational behaviors such as:
◦Acting to obtain information
◦Signalling to one's partner
◦Acting randomly to minimize information disclosure

