Mario AI Championship 2012: The 2012 Mario AI Championship, the successor to the successful 2010 and 2011 Mario AI Competitions, was run in association with several major international conferences focusing on computational intelligence and games.
Our project draws inspiration from this competition: the goal is to develop the best possible controller (agent) to play the game using an artificial neural network.
Neural networks are used for modeling complex relationships between inputs and outputs and for finding patterns in data. They are tolerant of noise and errors, and an ANN can perform tasks that a linear program cannot. Thanks to their parallel nature, when an element of the neural network fails, the network can continue operating without problems.
By learning the environment around Mario, the controller's job is to win as many levels as possible. At each time step the controller has to decide what action to take (left, right, jump, etc.). The linear world of Mario is filled with enemies, obstacles and power-ups. The problem the agent faces when playing the game is how to correctly interpret what it sees in the environment and decide on the best action to take.
Input & Output: one frame yields one data set (data set = input + output). Every frame, our script scans the Mario code, extracts the environment matrix together with the keyboard action, and obtains one data set. For example, monitoring 3 minutes of play at 24 frames per second yields 3*60*24 = 4320 data sets. Input = environment matrix + Mario's position. Output = keyboard action bits (0/1) for left, right, up, down, shoot, jump; for example, 0 0 0 0 0 1 is a jump action.
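The per-frame encoding described above can be sketched as follows; this is a minimal illustration, and the class and method names are our own, not taken from the Mario source code.

```java
// Hypothetical sketch of the per-frame data-set encoding: the environment
// matrix plus Mario's position becomes the input vector, and the keyboard
// state becomes a 6-bit output vector (left, right, up, down, shoot, jump).
public class FrameEncoder {

    public static double[] encodeInput(int[][] environment, int marioX, int marioY) {
        int rows = environment.length, cols = environment[0].length;
        double[] input = new double[rows * cols + 2];
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                input[r * cols + c] = environment[r][c]; // flatten the matrix
        input[rows * cols] = marioX;      // Mario's position appended last
        input[rows * cols + 1] = marioY;
        return input;
    }

    public static int[] encodeOutput(boolean left, boolean right, boolean up,
                                     boolean down, boolean shoot, boolean jump) {
        boolean[] keys = {left, right, up, down, shoot, jump};
        int[] out = new int[keys.length];
        for (int i = 0; i < keys.length; i++) out[i] = keys[i] ? 1 : 0;
        return out;
    }
}
```

With this encoding, `encodeOutput(false, false, false, false, false, true)` produces `{0, 0, 0, 0, 0, 1}`, the jump action from the example above.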
Encapsulate patterns into Java agents: wrap the learned patterns in Java-style agents so that they can be used in the Super Mario source code.
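One way to picture such an agent wrapper is sketched below. The `Agent` interface, `AnnAgent` class and the tiny stand-in `NeuralNetwork` are all illustrative assumptions, not the actual benchmark API or the trained network; the real agent would load the weights produced by training.

```java
// Hypothetical agent interface: one boolean per key, decided each frame.
interface Agent {
    boolean[] getAction(double[] observation);
}

// Minimal single-layer stand-in for the trained network.
class NeuralNetwork {
    private final double[][] weights; // weights[output][input]
    NeuralNetwork(double[][] weights) { this.weights = weights; }

    double[] feedForward(double[] in) {
        double[] out = new double[weights.length];
        for (int o = 0; o < weights.length; o++) {
            double sum = 0;
            for (int i = 0; i < in.length; i++) sum += weights[o][i] * in[i];
            out[o] = 1.0 / (1.0 + Math.exp(-sum)); // sigmoid activation
        }
        return out;
    }
}

// Wraps the network so it can be dropped into the game loop as an agent.
class AnnAgent implements Agent {
    private final NeuralNetwork net;
    AnnAgent(NeuralNetwork net) { this.net = net; }

    @Override
    public boolean[] getAction(double[] observation) {
        double[] raw = net.feedForward(observation);
        boolean[] keys = new boolean[raw.length];
        // Threshold each output unit at 0.5 to decide whether to press the key.
        for (int i = 0; i < raw.length; i++) keys[i] = raw[i] > 0.5;
        return keys;
    }
}
```

Thresholding at 0.5 is one simple way to turn the network's continuous outputs into the binary key presses the game expects.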
Test ANN agents: import the ANN Java library into the Super Mario code and insert the ANN agent for testing.
First experiment: 13*19 + 1 + 6. 13*19: the environment around Mario; 1: Mario's state; 6: human keyboard actions (up, down, left, run, jump, fire). Doesn't work well: the input data is too large.
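The input-vector sizes behind these experiment labels can be checked with a one-line helper (a hypothetical name, used only to make the arithmetic explicit):

```java
// Input-vector size = environment grid cells + Mario's state variables.
public class InputSizes {
    public static int inputSize(int rows, int cols, int stateVars) {
        return rows * cols + stateVars;
    }
}
```

The 13*19 grid plus one state variable gives 248 inputs, while the reduced 10*10 grid used in the later experiments gives only 101.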
Second experiment: 10*10 + 1 + 2. 10*10: reduced amount of environment data; 1: Mario's state; 2: human keyboard action, with run & jump kept separate. Doesn't work well: run and jump are distinguished, but Mario sometimes needs to run and jump at the same time.
Third experiment: 10*10 + 1 + 2. 10*10: keep the environment data small but effective; 1: Mario's state; 2: run & jump merged together. Works better but still not satisfactory: too much noise in the training data recorded from human play.
Fourth experiment: 10*10 + 1 + 2. We realized that the human gameplay recorded as training data is critical to the experiment, so the data was recorded with extra care during human play.
How do we obtain the best input/output pairs? For the input, remove decorative redundancy; for the output, a good human player is critical. As for the hidden layer, tuning is the best approach. Is an ANN the best way to design a Mario agent? Not really: the major problem is that we cannot obtain sufficient and accurate training data from human players. Reinforcement learning may be better.
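To make the reinforcement-learning alternative concrete, a tabular Q-learning update is sketched below. The state discretization, class name and hyperparameters are illustrative assumptions; the point is that the agent learns from its own play and rewards rather than from human demonstrations.

```java
// Tabular Q-learning sketch: learns action values from rewards observed
// during the agent's own play, removing the need for human training data.
public class QLearner {
    private final double[][] q;        // q[state][action] value table
    private final double alpha, gamma; // learning rate, discount factor

    public QLearner(int states, int actions, double alpha, double gamma) {
        this.q = new double[states][actions];
        this.alpha = alpha;
        this.gamma = gamma;
    }

    // Standard Q-learning update after taking action a in state s,
    // receiving reward, and observing next state sNext.
    public void update(int s, int a, double reward, int sNext) {
        double best = Double.NEGATIVE_INFINITY;
        for (double v : q[sNext]) best = Math.max(best, v);
        q[s][a] += alpha * (reward + gamma * best - q[s][a]);
    }

    public double value(int s, int a) { return q[s][a]; }
}
```

For Mario, the discretized environment grid would serve as the state and the key combinations as the actions, with rewards for progress, coins and level completion.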