The Implementation of Machine Learning in the Game of Checkers


1 The Implementation of Machine Learning in the Game of Checkers
Billy Melicher, Computer Systems Lab 08, 10/29/08

2 Abstract Machine learning uses past information to predict future states. It can be applied in any situation where the past predicts the future, and it adapts to new situations.

3 Introduction Checkers is used to explore machine learning.
Checkers has many tactical aspects that make it a good game for study.

4 Background Minimax, heuristics, and learning.

5 Minimax A method of adversarial search.
Every pattern (board) can be given a fitness value (heuristic), and each player chooses the outcome that is best for them from the choices available. A minimal sketch follows.
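The presentation gives no code, but a minimal sketch of plain minimax might look like the following. The `is_terminal`, `children`, and `heuristic` methods are an assumed interface, not part of the original project.

```python
# Plain minimax sketch (Python). The state interface (is_terminal,
# children, heuristic) is hypothetical, used only for illustration.

def minimax(state, depth, maximizing):
    """Return the best heuristic value reachable within `depth` plies."""
    if depth == 0 or state.is_terminal():
        return state.heuristic()      # fitness value of the board
    if maximizing:
        # the maximizing player picks the child with the highest value
        return max(minimax(c, depth - 1, False) for c in state.children())
    else:
        # the minimizing player picks the child with the lowest value
        return min(minimax(c, depth - 1, True) for c in state.children())
```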

6 Minimax [Minimax game-tree diagram, taken from Wikipedia]

7 Minimax The search tree has an exponential growth rate,
so the algorithm can only evaluate a certain number of moves into the future (the ply depth).

8 Heuristics A heuristic predicts the outcome of a board.
It is the fitness value of a board: the higher the value, the better the expected outcome. Heuristics are not perfect and require expertise in the situation to create.

9 Heuristics H(s) = c0F0(s) + c1F1(s) + … + cnFn(s), where H(s) is the heuristic value of board s, the Fi are feature terms, and the ci are weights.
The heuristic has many different terms. In checkers the terms could be: number of checkers, number of kings, number of checkers on an edge, and how far the checkers have advanced on the board. A sketch of this weighted sum appears below.
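As an illustration of the weighted-sum form (not code from the presentation), here is a sketch in Python; the feature functions, board interface, and weights are hypothetical.

```python
# Sketch of H(s) = c0*F0(s) + c1*F1(s) + ... + cn*Fn(s).
# Feature functions and weights below are illustrative only.

def heuristic(board, weights, features):
    """board: game state; weights: the ci; features: the Fi as callables."""
    return sum(c * f(board) for c, f in zip(weights, features))

# Example checkers-style features, assuming a hypothetical board interface:
features = [
    lambda b: b.count("my_man") - b.count("opp_man"),    # checker difference
    lambda b: b.count("my_king") - b.count("opp_king"),  # king difference
    lambda b: b.edge_count("mine"),                      # checkers on an edge
    lambda b: b.advancement("mine"),                     # how far pieces have advanced
]
weights = [1.0, 1.5, 0.3, 0.2]  # the ci, adjusted by learning
```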

10 Learning by Rote Stores every game played.
It connects the moves made for each board and relates them to the outcome of the game: the program becomes more likely to make moves that resulted in a win and less likely to make moves that resulted in a loss. Good in the endgame, but not as good in the midgame. A minimal sketch of such a move table follows.
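A minimal sketch of how such a rote table could be kept, assuming boards and moves are hashable keys; all names here are illustrative, not taken from the project.

```python
from collections import defaultdict

# move_stats[board_key][move] -> [wins, losses] observed after making
# `move` from that board. Purely illustrative structure.
move_stats = defaultdict(lambda: defaultdict(lambda: [0, 0]))

def record_game(history, won):
    """history: list of (board_key, move) pairs from one finished game."""
    for board_key, move in history:
        stats = move_stats[board_key][move]
        stats[0 if won else 1] += 1

def preferred_move(board_key, legal_moves):
    """Prefer moves with the best observed win rate; unseen moves score 0.5."""
    def win_rate(move):
        wins, losses = move_stats[board_key][move]
        total = wins + losses
        return wins / total if total else 0.5
    return max(legal_moves, key=win_rate)
```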

11 How I store data I convert each checkerboard into a 32-digit base-5 number, where each digit corresponds to a playable square and its value corresponds to what occupies that square.
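A sketch of that encoding, assuming the 32 playable squares are listed in a fixed order and each square holds one of five states; the specific numeric codes are an assumption, not necessarily the project's convention.

```python
# Encode a checkers position as a 32-digit base-5 number.
# The digit assignments below are an assumed convention.
CODES = {"empty": 0, "my_man": 1, "my_king": 2, "opp_man": 3, "opp_king": 4}

def encode(squares):
    """squares: list of 32 state strings, one per playable square."""
    value = 0
    for state in squares:                 # most significant digit first
        value = value * 5 + CODES[state]
    return value

def decode(value):
    """Invert encode(): recover the 32 square states from the integer."""
    names = {v: k for k, v in CODES.items()}
    digits = []
    for _ in range(32):
        value, d = divmod(value, 5)
        digits.append(names[d])
    return list(reversed(digits))
```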

12 Learning by Generalization
Uses a heuristic function to guide moves and changes that function after games based on the outcome. Good in the midgame but not as good in the early and end games. Requires identifying the features that affect the game.

13 Development Uses the minimax algorithm with alpha-beta pruning.
Uses both learning by rote and learning by generalization, plus temporal difference learning. A sketch of alpha-beta pruned minimax follows.
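A minimal sketch of minimax with alpha-beta pruning, using the same hypothetical state interface as the earlier minimax sketch.

```python
# Minimax with alpha-beta pruning (sketch, illustrative interface).

def alphabeta(state, depth, alpha, beta, maximizing):
    if depth == 0 or state.is_terminal():
        return state.heuristic()
    if maximizing:
        value = float("-inf")
        for child in state.children():
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:      # opponent will never allow this branch
                break
        return value
    else:
        value = float("inf")
        for child in state.children():
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:      # we will never choose this branch
                break
        return value

# Typical call for a fixed ply limit:
# best = alphabeta(root, 6, float("-inf"), float("inf"), True)
```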

14 Temporal Difference Learning
In temporal difference learning, you adjust the heuristic based on the difference between its estimate at one time and at another, so that at equilibrium it moves toward the ideal function: U(s) ← U(s) + α(R(s) + γU(s') − U(s)). A sketch of this update follows.
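A sketch of that update rule applied to a table of board values; the learning rate, discount, and reward convention shown here are assumptions for illustration, not values from the presentation.

```python
# Temporal difference update: U(s) <- U(s) + alpha*(R(s) + gamma*U(s') - U(s)).
# U is a dict mapping board keys to value estimates.

def td_update(U, s, s_next, reward, alpha=0.1, gamma=1.0):
    """Move the estimate for board s toward reward + gamma * U(s')."""
    target = reward + gamma * U.get(s_next, 0.0)
    current = U.get(s, 0.0)
    U[s] = current + alpha * (target - current)

def update_after_game(U, states, final_reward, alpha=0.1, gamma=1.0):
    """Apply TD updates along the boards seen in one game.
    Intermediate rewards are 0; the terminal board gets final_reward."""
    U[states[-1]] = final_reward          # e.g. +1 for a win, -1 for a loss
    for s, s_next in zip(states, states[1:]):
        td_update(U, s, s_next, reward=0.0, alpha=alpha, gamma=gamma)
```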

15 Temporal Difference Learning
There is no proof that a prediction made closer to the end of the game is better, but common sense says it is. The method changes the heuristic so that it better predicts the value of all boards by adjusting the heuristic's weights.

16 Alpha Value The alpha value scales down the change to the heuristic based on how much data you already have, giving diminishing returns. This is necessary to ensure that rare occurrences do not change the heuristic too much. A sketch of one possible schedule follows.
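The presentation does not give the exact schedule; one common choice that matches the description (the more data seen for a board, the smaller the update) is shown below as an assumption.

```python
# Decreasing-alpha schedule (assumed form, not taken from the slides):
# the more often a board has been seen, the less one new outcome moves it.
visit_counts = {}

def alpha_for(board_key):
    n = visit_counts.get(board_key, 0)
    visit_counts[board_key] = n + 1
    return 1.0 / (1.0 + n)
```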

17 Results The value of a weight reaches equilibrium.
It changes to reflect the learning of the program, and occasionally requires programmer intervention when it reaches a false equilibrium.

18 Results During the course of a game, the value of this particular weight centers around 10.

19 Results Learning by rote requires a large data set
and large amounts of memory. It is necessary for determining the alpha value in temporal difference learning.

20 Results The benefit of learning by rote does increase with the number of games played, but with diminishing returns and a large memory cost.

