
1 Artificial Intelligence 5. Game Playing Course V231 Department of Computing Imperial College © Simon Colton

2 Two Player Games
Competitive rather than cooperative
– One player loses, one player wins
Zero sum game
– One player wins what the other one loses
– See game theory for the mathematics
Getting an agent to play a game
– Boils down to how it plays each move
– Express this as a search problem
Cannot backtrack once a move has been made (episodic)

3 (Our) Basis of Game Playing: Search for the best move every time
[Diagram: initial board state leading to board states 2–5, alternating our search for a move with the opponent's moves]

4 Lookahead Search
If I played this move
– Then they might play that move
Then I could do that move
– And they would probably do that move
– Or they might play that move
Then I could do that move
– And they would play that move
Or I could play that move
– And they would do that move
If I played this move…

5 Lookahead Search (best moves)
If I played this move
– Then their best move would be…
Then my best move would be…
– Then their best move would be…
– Or another good move for them is…
Then my best move would be…
– Etc.

6 Minimax Search
Like children sharing a cake
Underlying assumption
– Opponent acts rationally
Each player moves in such a way as to
– Maximise their final winnings, minimise their losses
– i.e., play the best move at the time
Method (see the sketch below):
– Calculate the guaranteed final score for each move, assuming the opponent will try to minimise that score
– Choose the move that maximises this guaranteed score
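
A minimal minimax sketch (not from the slides): the state interface used here (is_terminal, score, moves, apply) is hypothetical and stands in for whatever board representation you choose.

```python
# Minimax sketch. The state object and its methods are assumptions:
# is_terminal() -> bool, score() -> final score for the maximising player,
# moves() -> available moves, apply(move) -> resulting state.

def minimax(state, maximising):
    """Return the guaranteed final score for the maximising player,
    assuming both players play their best move."""
    if state.is_terminal():
        return state.score()
    values = [minimax(state.apply(m), not maximising) for m in state.moves()]
    return max(values) if maximising else min(values)

def best_move(state):
    """Choose the move that maximises the guaranteed score."""
    return max(state.moves(), key=lambda m: minimax(state.apply(m), False))
```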

7 Example Trivial Game
Deal four playing cards out, face up
Player 1 chooses one, player 2 chooses one
– Player 1 chooses another, player 2 chooses another
And the winner is…
– Each player adds up their cards
– The player with the highest even number scores that amount (in pounds sterling, from the opponent)

8 For Trivial Games
Draw the entire search space
Put the scores associated with each final board state at the ends of the paths
Move the scores from the ends of the paths to the starts of the paths
– Whenever there is a choice, use the minimax assumption
– This guarantees the scores you can get
Choose the path with the best score at the top
– Take the first move on this path as the next move
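
As a concrete illustration of searching the entire space, here is a sketch of exhaustive minimax for the four-card game above. The deal (3, 5, 7, 8) is an invented example, and the handling of ties and of two odd totals is an assumption, since the slides do not spell it out.

```python
# Exhaustive minimax for the four-card game. Payoff is from player 1's
# point of view; the tie/odd-total rules below are assumptions.

def payoff(p1_cards, p2_cards):
    """Pounds won (positive) or lost (negative) by player 1."""
    s1, s2 = sum(p1_cards), sum(p2_cards)
    e1 = s1 if s1 % 2 == 0 else 0     # odd totals count as zero
    e2 = s2 if s2 % 2 == 0 else 0
    if e1 > e2:
        return e1                     # player 1 wins their even total
    if e2 > e1:
        return -e2                    # player 2 wins their even total
    return 0                          # assumed: no money changes hands

def minimax_cards(remaining, p1_cards, p2_cards, p1_to_move):
    if not remaining:
        return payoff(p1_cards, p2_cards)
    values = []
    for i, card in enumerate(remaining):
        rest = remaining[:i] + remaining[i + 1:]
        if p1_to_move:
            values.append(minimax_cards(rest, p1_cards + [card], p2_cards, False))
        else:
            values.append(minimax_cards(rest, p1_cards, p2_cards + [card], True))
    return max(values) if p1_to_move else min(values)

# Hypothetical deal: cards 3, 5, 7, 8 face up, player 1 to choose first.
print(minimax_cards([3, 5, 7, 8], [], [], True))
```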

9 Entire Search Space

10 Moving the scores from the bottom to the top

11 Moving a score when there's a choice
Use the minimax assumption
– Make the rational choice for the player below the number you're moving

12 Choosing the best move

13 For Real Games
Search space is too large
– So we cannot draw (search) the entire space
For example: chess has a branching factor of ~35
– Suppose our agent searches 1000 board states per second
– And has a time limit of 150 seconds
– So it can search 150,000 positions per move
– This is only three or four ply of lookahead, because 35^3 = 42,875 and 35^4 = 1,500,625
– Average humans can look ahead six to eight ply
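
A quick snippet reproducing the arithmetic on this slide:

```python
# Positions searchable per move vs. positions per ply in chess.
branching = 35                             # approximate branching factor
positions_per_second = 1000
time_limit = 150                           # seconds per move

print(positions_per_second * time_limit)   # 150000 positions per move
print(branching ** 3, branching ** 4)      # 42875 and 1500625: between 3 and 4 ply
```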

14 Cutoff Search
Must use a heuristic search
Use an evaluation function
– Estimate the guaranteed score from a board state
Draw the search space to a certain depth
– Depth chosen to limit the time taken
Put the estimated values at the ends of the paths
Propagate them to the top as before (see the sketch below)
Question:
– Is this a uniform path cost, greedy or A* search?
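
A minimal depth-limited (cutoff) minimax sketch, using the same hypothetical state interface as before plus an evaluate(state) heuristic supplied by the caller:

```python
# Cutoff minimax: search to a fixed depth, then fall back on the
# evaluation function instead of the true final score.

def cutoff_minimax(state, depth, maximising, evaluate):
    """Estimate the guaranteed score, cutting the search off at `depth`."""
    if state.is_terminal():
        return state.score()              # true score at goal states
    if depth == 0:
        return evaluate(state)            # heuristic estimate at the cutoff
    values = [cutoff_minimax(state.apply(m), depth - 1, not maximising, evaluate)
              for m in state.moves()]
    return max(values) if maximising else min(values)
```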

15 Evaluation Functions
Must be able to differentiate between good and bad board states
– Exact values are not important
– Ideally, the function would return the true score for goal states
Example in chess
– Weighted linear function
– Weights: pawn=1, knight=bishop=3, rook=5, queen=9

16 Example Chess Score
Black has:
– 5 pawns, 1 bishop, 2 rooks
– Score = 1*(5) + 3*(1) + 5*(2) = 5 + 3 + 10 = 18
White has:
– 5 pawns, 1 rook
– Score = 1*(5) + 5*(1) = 5 + 5 = 10
Overall scores for this board state:
– black = 18 - 10 = 8
– white = 10 - 18 = -8
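
A sketch of the weighted linear evaluation behind these numbers; the piece-list representation of the position is an assumption made just for this example.

```python
# Material evaluation with the slide's weights.
WEIGHTS = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def material(pieces):
    """Sum of weighted piece values for one side."""
    return sum(WEIGHTS[p] for p in pieces)

black = ["pawn"] * 5 + ["bishop"] + ["rook"] * 2
white = ["pawn"] * 5 + ["rook"]

print(material(black))                    # 5 + 3 + 10 = 18
print(material(white))                    # 5 + 5 = 10
print(material(black) - material(white))  # +8 for black, hence -8 for white
```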

17 Evaluation Function for our Game
Evaluation after the first move
– Count zero if it's odd, take the number if it's even
The evaluation function here would choose 10
– But this would be disastrous for the player
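
The slide's rule as a tiny function; the running total passed in is whatever the player has picked up so far.

```python
# Evaluation after the first move: an odd total counts as zero,
# an even total counts as itself.

def evaluate_hand(total):
    return total if total % 2 == 0 else 0

print(evaluate_hand(10))  # 10 (even, so taken at face value)
print(evaluate_hand(7))   # 0  (odd, so worth nothing yet)
```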

18 Problems with Evaluation Functions
Horizon problem
– Agent cannot see far enough into the search space
– Potentially disastrous board position after a seemingly good one
Possible solution
– Reduce the number of initial moves to look at
– Allows you to look further into the search space
Non-quiescent search
– Exhibits big swings in the evaluation function
– E.g., when taking pieces in chess
– Solution: advance the search past the non-quiescent part (see the sketch below)
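
One common way to "advance the search past the non-quiescent part" is a quiescence search that keeps expanding noisy moves (e.g. captures) beyond the normal cutoff until the position is quiet. This is a sketch of that idea, not the slides' prescription; the noisy_moves helper and the state interface are assumptions.

```python
# Quiescence sketch: at the cutoff, keep searching only "noisy" moves
# (such as captures) so the evaluation is taken from a quiet position.

def quiescence(state, maximising, evaluate, noisy_moves):
    stand_pat = evaluate(state)           # score if we stop searching here
    noisy = noisy_moves(state)            # e.g. capturing moves in chess
    if not noisy:
        return stand_pat                  # quiet position: trust the estimate
    values = [quiescence(state.apply(m), not maximising, evaluate, noisy_moves)
              for m in noisy]
    best = max(values) if maximising else min(values)
    # the side to move may also decline the noisy continuation
    return max(best, stand_pat) if maximising else min(best, stand_pat)
```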

19 Pruning
Want to visit as many board states as possible
– Want to avoid whole branches (prune them), because they can't possibly lead to a good score
– Example: having your queen taken in chess (queen sacrifices are often a very good tactic, though)
Alpha-beta pruning
– Can be used for the entire search or a cutoff search
– Recognises that a branch cannot produce a better score than a node you have already evaluated

20 Alpha-Beta Pruning for Player 1
1. Given a node N which can be chosen by player one, if there is another node X along any path such that (a) X can be chosen by player two, (b) X is on a higher level than N, and (c) X has been shown to guarantee a worse score for player one than N, then the parent of N can be pruned.
2. Given a node N which can be chosen by player two, if there is a node X along any path such that (a) player one can choose X, (b) X is on a higher level than N, and (c) X has been shown to guarantee a better score for player one than N, then the parent of N can be pruned.
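
The slides state the pruning rules in words; below is the usual recursive alpha-beta formulation of the same idea, again assuming the hypothetical state interface and an evaluate heuristic. A typical top-level call would be alphabeta(root, depth, float("-inf"), float("inf"), True, evaluate).

```python
# Alpha-beta pruning: alpha is the best score player one can already
# guarantee, beta the best (lowest) score player two can already guarantee.

def alphabeta(state, depth, alpha, beta, maximising, evaluate):
    if state.is_terminal():
        return state.score()
    if depth == 0:
        return evaluate(state)
    if maximising:
        value = float("-inf")
        for m in state.moves():
            value = max(value, alphabeta(state.apply(m), depth - 1,
                                         alpha, beta, False, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                 # player two will never allow this branch
        return value
    else:
        value = float("inf")
        for m in state.moves():
            value = min(value, alphabeta(state.apply(m), depth - 1,
                                         alpha, beta, True, evaluate))
            beta = min(beta, value)
            if beta <= alpha:
                break                 # player one already has something better
        return value
```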

21 Example of Alpha-Beta Pruning
[Diagram: game tree with player 1 and player 2 levels, showing a pruned branch]
Depth-first search is a good idea here
– See notes for explanation

22 Games with Chance
Many more interesting games
– Have an element of chance
– Brought in by throwing a die or tossing a coin
Example: backgammon
– See Gerry Tesauro's TD-Gammon program
In these cases
– We can no longer calculate guaranteed scores
– We can only calculate expected scores, using probability to guide us

23 Expectimax Search
Going to draw the tree and move values up as before
Whenever there is a random event
– Add an extra node for each possible outcome which will change the board states possible after the event
– E.g., six extra nodes if each roll of a die affects the state
– Work out all possible board states from the chance node
When moving score values up through a chance node
– Multiply each value by the probability of its outcome happening, and add the products together
– This gives you the expected value coming through the chance node (see the sketch below)
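
A sketch of the expectimax rule just described, assuming chance nodes expose their outcomes as (probability, resulting_state) pairs via a hypothetical outcomes() method, and ordinary nodes use the same interface as before.

```python
# Expectimax: max/min at player nodes, probability-weighted average
# (expected value) at chance nodes.

def expectimax(state, maximising):
    if state.is_terminal():
        return state.score()
    if state.is_chance_node():
        # weight each outcome's value by its probability and add them up
        return sum(p * expectimax(s, maximising)
                   for p, s in state.outcomes())
    values = [expectimax(state.apply(m), not maximising) for m in state.moves()]
    return max(values) if maximising else min(values)
```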

24 More interesting (but still trivial) game
Deal four cards face up
Player 1 chooses a card
Player 2 throws a die
– If it's a six, player 2 chooses a card, swaps it with player 1's and keeps player 1's card
– If it's not a six, player 2 just chooses a card
Player 1 chooses the next card
Player 2 takes the last card

25 Expectimax Diagram

26 Expectimax Calculations

27 Games Played by Computer
Games played perfectly:
– Connect four, noughts & crosses (tic-tac-toe)
– Best move pre-calculated for each board state (small number of possible board states)
Games played well:
– Chess, draughts (checkers), backgammon
– Scrabble, Tetris (using ANNs)
Games played badly:
– Go, bridge, soccer

28 Philosophical Questions
Q1. Is how computers play chess more fundamental than how people play chess?
– In science, simple & effective techniques are valued
– Minimax cutoff search is simple and effective
– But this is seen by some as stupid and non-AI
Drew McDermott:
– "Saying Deep Blue doesn't really think about chess is like saying an airplane doesn't really fly because it doesn't flap its wings."
Q2. If aliens came to Earth and challenged us to chess…
– Would you send Deep Blue or Kasparov into battle?

