Senior Project – Computer Science – 2015
Modelling Opponents in Board Games
Julian Jocque
Advisor – Prof. Rieffel


Abstract

Modelling opponents in a game has many potential applications. The goal of opponent modelling is to build a model of an opponent in order to predict the moves that opponent will make. One possible application would be a computer Chess player that plays similarly to a famous chess player, even in game situations that player has never seen before. To model opponents in board games I chose the Estimation Exploration Algorithm, hereafter referred to as the EEA, which has been shown to solve similar problems with a high degree of success. The algorithm works by creating and evolving a set of models and a set of tests, then using the tests it created to iteratively increase the accuracy of the models. I created a system that uses the board game Konane and the EEA to model opponents.

Approach

[Figure: example Konane board in starting position]

I chose to approach this problem using the simple game of Konane.
Konane is a game similar to checkers, and as such it is easy for a computer both to model and to play. The EEA is an evolutionary algorithm that continually evolves models and tests. In my approach, the models are static evaluators for a player using the minimax strategy, and the tests are board states for the opponent to respond to. The models are evolved to correctly predict the opponent's responses to the tests, and the tests are evolved to maximize disagreement among the models. This disagreement is key: it minimizes the number of responses we need from the opponent, because if a test causes great disagreement among a set of models, only the very best models will survive evolution. The fitness of the models is then judged by their agreement with the moves the opponent actually made on the evolved tests.
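To make the "static evaluator as model" idea concrete, here is a minimal, hypothetical sketch (not the project's actual code): a model is just a weight vector over board features, and the modelled player picks the move whose resulting board its evaluator scores highest. The board encoding, the single piece-count feature, and all names here are assumptions for illustration.

```python
# Hypothetical sketch: a model is a weight vector over board features.

def piece_diff(board, me="X", opp="O"):
    """Difference in piece counts -- a classic static-evaluator feature."""
    flat = [cell for row in board for cell in row]
    return flat.count(me) - flat.count(opp)

def evaluate(board, weights):
    """Score a board as a weighted sum of features; the weights ARE the model."""
    features = [piece_diff(board)]
    return sum(w * f for w, f in zip(weights, features))

def choose_move(board, moves, apply_move, weights):
    """One-ply lookahead: play the move whose resulting board the model's
    evaluator scores highest (the real system would recurse with minimax)."""
    return max(moves, key=lambda m: evaluate(apply_move(board, m), weights))
```

In the real system the evaluator sits inside a deeper minimax search and the feature weights are what evolution tunes; a single piece-count feature stands in here for the whole feature vector.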
This approach very rapidly finds models that accurately predict the moves an opponent makes.

System

The system was written entirely in Python. All code is original with the exception of the Konane board game and rules, which came from Prof. Rieffel. The system follows a modified EEA adapted to board games:

1. Generate a set of random static evaluators to act as the models.
2. Evolve board states (tests) that maximize disagreement among the models.
3. Get the opponent's responses to those board states.
4. Update the models' fitness based on their agreement with the opponent's moves.
5. Evolve the models until they are sufficiently good at predicting opponent moves, returning to step 2 as needed.

Results and Evaluation

I was able to complete all objectives and get the system fully built, and I was able to collect a large amount of data from it. A graph of the minimum, maximum, and median fitness for one run of the system is below. Each black line indicates that the number of tests the models must accurately predict has increased. Although model fitness appears to stay the same, it is actually rising to solve a progressively harder problem. The accuracy of the models varied widely across runs of the system. When the settings of the models and the opponents were similar, I found upwards of 90% agreement among them; when the settings differed, accuracy never exceeded 65%, which suggests there may be limits to how well this system can model an arbitrary opponent.
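One iteration of the modified EEA loop described in the System section can be sketched as follows. Everything here is an assumption for illustration (the `disagreement` and `agreement` scores, the `predict` and `opponent` callables), not the project's actual interface: tests are scored by how much the current models disagree on them, and model fitness is agreement with the opponent's recorded responses.

```python
# Hypothetical sketch of one step of the modified EEA loop.

def disagreement(test, models, predict):
    # A test is more informative the more distinct moves the models predict.
    return len({predict(test, m) for m in models})

def agreement(model, archive, predict):
    # Fitness: fraction of recorded opponent responses the model reproduces.
    hits = sum(predict(test, move := m) == move for test, m in archive)
    return hits / len(archive)

def eea_step(models, candidate_tests, archive, opponent, predict):
    # 1. Evolve tests: keep the board state maximizing model disagreement.
    test = max(candidate_tests, key=lambda t: disagreement(t, models, predict))
    # 2. Query the opponent and record its response to that test.
    archive.append((test, opponent(test)))
    # 3. Re-rank the models by agreement with all recorded responses.
    models.sort(key=lambda m: agreement(m, archive, predict), reverse=True)
    return models[0]  # current best model of the opponent
```

In the full algorithm both populations are evolved with genetic operators rather than selected from fixed candidate lists, but the core interplay is the same: disagreement drives which questions get asked, and agreement drives which models survive.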