
Slide 1: Game Tree Search

Artificial Intelligence, Ian Gent. Games 1: Game Tree Search.

Slide 2: Game Tree Search

- Part I: Game Trees
- Part II: MiniMax
- Part III: A bit of Alpha-Beta

Slide 3: Perfect Information Games

- Unlike Bridge, we consider 2-player perfect information games.
- Perfect information: both players know everything there is to know about the game position.
  - no hidden information (e.g. opponents' hands in Bridge)
  - no random events (e.g. draws in Poker)
  - the two players need not have the same set of moves available
  - examples: Chess, Go, Checkers, O's and X's
- Ginsberg made Bridge a 2-player perfect information game:
  - by assuming specific random locations of the cards
  - the two players were North-South and East-West

Slide 4: Game Trees

- A game tree is like a search tree.
  - nodes are search states, with full details about a position (e.g. chessboard plus castling/en passant information)
  - edges between nodes correspond to moves
  - leaf nodes correspond to determined positions (e.g. Win/Lose/Draw, or a number of points for or against a player)
  - at each node it is one or the other player's turn to move

Slide 5: Game Trees vs Search Trees

- Strong similarities with 8s-puzzle search trees:
  - there may be loops/infinite branches
  - typically no equivalent of a variable ordering heuristic: the "variable" is always what move to make next
- One major difference from the 8s puzzle: you have an opponent!
- Call the two players Max and Min.
  - Max wants a leaf node with the maximum possible score (e.g. Win = +∞)
  - Min wants a leaf node with the minimum score (e.g. Lose = −∞)

Slide 6: The Problem with Game Trees

- Game trees are huge:
  - O's and X's is not bad: just 9! = 362,880 game sequences
  - Checkers/Draughts: about 10^20 positions
  - Chess: about 10^120 games
  - Go: utterly ludicrous, e.g. 361! game sequences
- Recall from the Search 1 lecture:
  - it is not good enough to find a route to a win
  - we have to find a winning strategy
  - unlike 8s/SAT/TSP, we can't just look for one leaf node; typically we need lots of different winning leaf nodes
  - much more of the tree needs to be explored
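The factorial bounds above are easy to check directly. A quick sketch (the digit count for 361! is computed rather than quoted from the slide):

```python
import math

# Upper bound on O's and X's game sequences: 9 choices, then 8, then 7, ...
print(math.factorial(9))                  # 362880

# The corresponding bound for Go on a 19x19 board has hundreds of digits.
print(len(str(math.factorial(361))))      # 769
```

Even these are loose over-counts (they ignore games that end early and positions reached by different move orders), which is the slide's point: exhaustive search is hopeless.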

Slide 7: Coping with Impossibility

- It is usually impossible to solve games completely:
  - Connect 4 has been solved
  - Checkers has not been (we'll see a brave attempt later)
- This means we cannot search the entire game tree:
  - we have to cut off search at a certain depth (like depth-bounded depth-first search, so we lose completeness)
- Instead we have to estimate the value of internal nodes.
- We do so using a static evaluation function.

Slide 8: Static Evaluation

- A static evaluation function should estimate the true value of a node.
  - true value = value of the node if we performed exhaustive search
  - it need not be just +∞/0/−∞, even if those are the only final scores
  - it can indicate the degree of a position (e.g. nodes might evaluate to +1, 0, -10)
- Children learn a simple evaluation function for chess:
  - P = 1, N = B = 3, R = 5, Q = 9, K = 1000
  - static evaluation is the difference in the sums of piece scores
  - chess programs have much more complicated functions
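The material-count function on this slide can be sketched in a few lines. The board encoding here (a string of piece letters, uppercase for Max/White, lowercase for Min/Black) is a hypothetical choice for illustration, not part of the lecture:

```python
# Piece values from the slide: P=1, N=B=3, R=5, Q=9, K=1000.
PIECE_VALUES = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9, 'K': 1000}

def material_eval(board):
    """Static evaluation: sum of Max's piece values minus Min's.
    `board` is assumed (for this sketch) to be a string of piece
    letters, uppercase for Max, lowercase for Min, '.' for empty."""
    score = 0
    for square in board:
        if square.upper() in PIECE_VALUES:
            value = PIECE_VALUES[square.upper()]
            score += value if square.isupper() else -value
    return score

# Equal material plus one extra white pawn evaluates to +1.
print(material_eval("RNBQKBNRPPPPPPPPpppppppprnbqkbnr" + "P"))  # 1
```

Note how the two kings cancel out; the K = 1000 value only matters so that no combination of other material outweighs losing the king.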

Slide 9: O's and X's

- A simple evaluation function for O's and X's:
  - count the lines still open for X (Max)
  - subtract the number of lines still open for O (Min)
  - the evaluation at the start of the game is 0
  - after X moves in the centre, the score is +4
- Evaluation functions are only heuristics.
  - e.g. a position might score -2 even though X can win on the next move:

    O - X
    - O X
    - - -

- So we use a combination of evaluation function and search.
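The open-lines heuristic is small enough to write out in full. The 9-character string encoding of the board (indices 0-8, row by row) is an assumption of this sketch:

```python
def open_lines_eval(board):
    """The slide's O's and X's heuristic: lines still open for X (Max)
    minus lines still open for O (Min). `board` is assumed to be a
    9-char string of 'X', 'O', '-', indices 0-8 row by row."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    score = 0
    for a, b, c in lines:
        cells = {board[a], board[b], board[c]}
        if 'O' not in cells:   # line still open for X
            score += 1
        if 'X' not in cells:   # line still open for O
            score -= 1
    return score

print(open_lines_eval("---------"))  # 0: start of game, 8 open lines each
print(open_lines_eval("----X----"))  # +4: X in the centre blocks 4 of O's lines
print(open_lines_eval("O-X-OX---"))  # -2, yet X wins by playing square 8
```

The last position is the slide's counterexample: the heuristic favours O, but X completes the right-hand column on the next move.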

Slide 10: MiniMax

- Assume that both players play perfectly.
  - therefore we cannot optimistically assume a player will miss a winning response to our moves
- E.g. consider Min's strategy:
  - Min wants the lowest possible score, ideally −∞
  - but must account for Max aiming for +∞
  - Min's best strategy is: choose the move that minimises the score that will result when Max chooses the maximising move
  - hence the name MiniMax
- Max does the opposite.

Slide 11: The Minimax Procedure

- Statically evaluate positions at depth d.
- From then on, work upwards:
  - the score of a Max node is the max of its children's scores
  - the score of a Min node is the min of its children's scores
- Doing this from the bottom up eventually gives the score of the possible moves from the root node, and hence the best move to make.
- We can still do this depth first, so it is space efficient.
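The procedure above can be sketched as a depth-first recursion. The `evaluate` and `children` callbacks are hypothetical stand-ins for a real game's static evaluation and move generation:

```python
def minimax(node, depth, maximizing, evaluate, children):
    """Depth-bounded minimax: statically evaluate at depth 0 (or at a
    leaf), otherwise take the max/min of the children's scores."""
    kids = children(node)
    if depth == 0 or not kids:   # depth cut-off, or a determined position
        return evaluate(node)
    if maximizing:               # Max node: max of child scores
        return max(minimax(k, depth - 1, False, evaluate, children)
                   for k in kids)
    else:                        # Min node: min of child scores
        return min(minimax(k, depth - 1, True, evaluate, children)
                   for k in kids)

# Toy tree: internal nodes are lists, leaves are numeric scores.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
value = minimax(tree, 2, True,
                evaluate=lambda n: n,
                children=lambda n: n if isinstance(n, list) else [])
print(value)  # 3: Max picks the branch whose minimum is largest
```

Because the recursion is depth first, only one path from root to leaf is held in memory at a time, which is the space efficiency the slide mentions.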

Slide 12: What's Wrong with MiniMax

- Minimax is horrendously inefficient.
- If we search to depth d with branching rate b, we must explore b^d nodes.
- But many nodes are wasted:
  - we needlessly calculate the exact score at every node
  - at many nodes we don't need to know the exact score
  - e.g. the nodes outlined in the slide's diagram are irrelevant

Slide 13: Alpha-Beta Search

- Alpha-Beta = α-β.
- Uses the same insight as branch and bound:
  - when we cannot do better than the best so far, we can cut off search in this part of the tree
- It is more complicated here because the two players' score functions are opposite.
- To implement it we manipulate alpha and beta values, and store them on internal nodes in the search tree.

Slide 14: Alpha and Beta Values

- At a Max node we store an alpha value:
  - the alpha value is a lower bound on the exact minimax score
  - the true value might be ≥ α
  - if we know Min can choose moves with score < α, then Min will never choose to let Max go to a node where the score will be α or more
- At a Min node we store a beta value:
  - the beta value is an upper bound on the exact minimax score
  - the true value might be ≤ β
- Alpha-Beta search uses these values to cut off search.
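The alpha and beta values above slot straight into the minimax recursion. A sketch, with the same hypothetical `evaluate` and `children` callbacks as before:

```python
def alphabeta(node, depth, alpha, beta, maximizing, evaluate, children):
    """Minimax with alpha-beta cut-offs: alpha is the lower bound
    carried down from Max nodes, beta the upper bound from Min nodes.
    Search below a node is cut off as soon as alpha >= beta."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        value = float('-inf')
        for k in kids:
            value = max(value, alphabeta(k, depth - 1, alpha, beta,
                                         False, evaluate, children))
            alpha = max(alpha, value)
            if alpha >= beta:    # Min above will never let us reach here
                break            # beta cut-off
        return value
    else:
        value = float('inf')
        for k in kids:
            value = min(value, alphabeta(k, depth - 1, alpha, beta,
                                         True, evaluate, children))
            beta = min(beta, value)
            if beta <= alpha:    # Max above already has something better
                break            # alpha cut-off
        return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
value = alphabeta(tree, 2, float('-inf'), float('inf'), True,
                  evaluate=lambda n: n,
                  children=lambda n: n if isinstance(n, list) else [])
print(value)  # 3: same answer as plain minimax, fewer nodes visited
```

On this toy tree the first branch establishes α = 2 at the root... actually α = 3, so in the second branch the leaf 2 immediately gives β = 2 ≤ α and the leaves 4 and 6 are never examined: the cut-off the next slide illustrates.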

Slide 15: Alpha-Beta in Action

- Why can we cut off search?
- In the slide's example, beta = 1 < alpha = 2, where the alpha value is at an ancestor node.
- At the ancestor node, Max had a choice guaranteeing a score of at least 2 (maybe more).
- Max is not going to move right to let Min guarantee a score of 1 (maybe less).

Slide 16: Summary and Next Lecture

- Game trees are similar to search trees, but have opposing players.
- Minimax characterises the value of nodes in the tree, but is horribly inefficient.
- Use static evaluation when the tree is too big.
- Alpha-Beta can cut off nodes that need not be searched.
- Next time: more details on Alpha-Beta.
