CHINESE CHECKERS – IN THE LIGHT OF AI
Computer Game Playing
Under the Guidance of Prof. Pushpak Bhattacharyya
Presented by: Anwesha Das, Prachi Garg and Asmita Sharma

OUTLINE
1. Background
2. History of Chinese Checkers
3. Introduction – Game Description
4. AI Perspective
   1. Minimax Algorithm
   2. Alpha-Beta Pruning
5. Heuristics
6. Conclusion
7. References

Computer Games & AI
Game playing was one of the earliest researched AI problems, since:
- games seem to require intelligence (heuristics)
- game states are easy to represent
- only a restricted number of actions are allowed, and outcomes are defined by precise rules
It has been studied for a long time:
- Babbage (tic-tac-toe)
- Turing (chess)
The ultimate motive is to get a game solved, meaning that the entire game tree can be built and the outcome of the game determined even before the game is started.

Class of Chinese Checkers
It falls under the category of zero-sum, discrete, finite, deterministic games of perfect information.
- Zero-sum: in the outcome of any game, player A's gain equals player B's loss.
- Discrete: all game states and decisions take discrete values.
- Finite: only a finite number of states and decisions.
- Deterministic: no element of chance (no die rolls).
- Perfect information: both players can see the full state, and each decision is made sequentially (no simultaneous moves).

History of Chinese Checkers
- Chinese Checkers was invented in Germany under the name "Stern-Halma".
- It is a variation of "Halma" (Greek for "jump"), invented between 1883 and 1884 by Dr. George Howard Monks, an American professor from Boston.

History of Chinese Checkers
A related milestone: the checkers program Chinook
- It used alpha-beta search
- It used a precomputed perfect endgame database
- In 1992 Chinook won the US Open
- … and challenged for the World Championship

- Dr. Marion Tinsley had been the world champion for over 40 years
- … losing only three games in all that time
- Against Chinook he suffered his fourth and fifth defeats
- … but ultimately won 21.5 to 18.5
- In August 1994 there was a re-match, but Marion Tinsley withdrew for health reasons
- Chinook became the official world champion

History of Chinese Checkers
- Chinook did not use any learning mechanism.
- In Kumar's later work, learning was done using a neural network whose synapses were changed by an evolutionary strategy.
- The best program beat a commercial application 6-0.
- The program was presented at CEC 2000 (San Diego) and remained undefeated.

Components of the Game
The game can be defined as a kind of search problem with the following components:
1. A finite set of states
2. The initial state: the board position and an indication of whose move it is
3. A set of operators: defines the legal moves a player can make
4. A terminal test: determines when the game is over (terminal states)
5. A utility (payoff) function: gives a numeric value for the outcome of a game (-1, +1, 0)
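The five components above can be sketched as a minimal interface in code. The counting game below is a hypothetical stand-in (not Chinese Checkers), chosen only so that each component fits in a few lines:

```python
from dataclasses import dataclass

# Sketch of the game-as-search-problem components listed above, illustrated
# with a trivial two-player counting game (an assumption for illustration).
@dataclass(frozen=True)
class State:
    total: int    # "board position"
    to_move: int  # whose move it is: +1 or -1

def initial_state():
    return State(total=0, to_move=+1)

def legal_moves(state):
    # Operators: each player may add 1 or 2 to the running total.
    return [1, 2]

def apply_move(state, move):
    return State(state.total + move, -state.to_move)

def is_terminal(state):
    # Terminal test: the game ends once the total reaches 4.
    return state.total >= 4

def utility(state):
    # Payoff: the player who made the final move wins (+1 or -1).
    return -state.to_move
```

Chinese Checkers itself fits the same shape: the state is the marble configuration plus the player to move, the operators are rolls and jump sequences, and the terminal test checks whether a player's destination triangle is filled.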

Introduction – How to Play
- 2 to 6 players can play the game, each having 10 same-colored marbles
- At the start, a player's marbles sit in the ten holes of the star point that has the same color as the marbles
- Goal: move all marbles of one color from the starting point to the star point on the opposite side of the board
- No game pieces are ever removed from the board.

Introduction – Constraints
- A marble can move by rolling to a hole next to it
- … or by jumping over one marble, of any color, into a free hole, along the lines connecting the holes in a hexagonal pattern
- Several jumps in a row are allowed, but only one roll
- A marble cannot both roll and jump in the same turn

Minimax
- John von Neumann outlined a search method (minimax) that maximizes your position while minimizing your opponent's
- Minimax searches the state space using the following assumptions:
  – your opponent is as clever as you
  – if your opponent can make things worse for you, they will take that move
  – your opponent won't make mistakes
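A minimal sketch of the idea above, using a hypothetical list-of-lists tree (plain numbers as leaf values) rather than a real board representation:

```python
def minimax(state, maximizing):
    """Plain minimax: the agent maximizes, the opponent minimizes.

    `state` is a game tree given as either a number (leaf value) or a
    list of child subtrees -- an illustrative representation only.
    """
    if isinstance(state, (int, float)):  # leaf: terminal/heuristic value
        return state
    child_values = [minimax(child, not maximizing) for child in state]
    return max(child_values) if maximizing else min(child_values)

# The MAX root picks the move whose worst-case MIN reply is best.
tree = [[3, 12], [8, 2]]  # two moves, each with two opponent replies
best = minimax(tree, maximizing=True)  # min(3,12)=3, min(8,2)=2, max -> 3
```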

[Figure: Minimax example – a three-ply game tree with MAX root A, MIN nodes B–C, MAX nodes D–G and leaves H–O; agent levels labelled MAX, opponent levels labelled MIN]

Minimax to a Fixed Ply-Depth
- It is usually not possible to expand the game tree to end-game status
- We have to choose a ply-depth that is achievable within reasonable time and resources
- Absolute 'win-lose' values then become heuristic scores
- Heuristics are devised according to knowledge of the game

[Figure: Minimax example continued – the same tree with heuristic leaf values backed up through the MAX/MIN/MAX levels]

Alpha-Beta Pruning
- Fixed-depth minimax searches the entire space down to a certain level in a breadth-first fashion, then backs values up; some of this search is wasteful
- Alpha-beta pruning identifies paths which need not be explored any further

Alpha-Beta Pruning
- Traverse the search tree in depth-first order
- At each MAX node n, alpha(n) = maximum value found so far
- At each MIN node n, beta(n) = minimum value found so far
- Note: alpha values start at -infinity and only increase, while beta values start at +infinity and only decrease
- Beta cutoff: given a MAX node n, cut off the search below n (i.e., don't generate or examine any more of n's children) if alpha(n) >= beta(i) for some MIN-node ancestor i of n
- Alpha cutoff: stop searching below MIN node n if beta(n) <= alpha(i) for some MAX-node ancestor i of n
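The cutoff rules above can be sketched as follows, again over a hypothetical list-of-lists game tree. Because `alpha` and `beta` are passed down as the tightest ancestor bounds, the comparisons against ancestor nodes reduce to local checks:

```python
import math

def alphabeta(state, maximizing, alpha=-math.inf, beta=math.inf):
    """Alpha-beta sketch over the same list-of-lists tree as plain minimax."""
    if isinstance(state, (int, float)):  # leaf value
        return state
    if maximizing:
        value = -math.inf
        for child in state:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:  # beta cutoff: a MIN ancestor won't allow this
                break
        return value
    else:
        value = math.inf
        for child in state:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if beta <= alpha:  # alpha cutoff: a MAX ancestor has better already
                break
        return value
```

On the tree `[[3, 12], [8, 2]]` the second subtree is cut off after the leaf 2, because MIN can already hold MAX below the 3 guaranteed by the first subtree; the returned value matches plain minimax.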

[Figures: alpha-beta pruning traced step by step on an example tree (MAX root A, MIN nodes B–C, MAX nodes D–G, leaves H–M) – alpha/beta bounds such as >=8, <=6, <=2 and >=6 are propagated until a beta cutoff and an alpha cutoff prune subtrees]

We use α-β pruning, which can optimize the choice of move. Here the white marbles may need 20 moves and the black 22; α-β pruning chooses the better move.

Heuristics
Possible heuristics for Chinese Checkers:
- Random: the computer makes a move randomly, without taking the current board configuration into consideration
- Caveat: the random player may never leave its triangle at all, denying the opponent the chance to occupy its winning triangle
- Vertical Displacement (VD): generates all moves with the minimax algorithm and sums up the vertical distances of its pieces; the same is done for the opponent
- Static evaluation function = total self − total opponent
- Vertical/Horizontal Displacement (HD): also considers the horizontal position of pieces in deciding the next move
- It is best to play in the middle of the board
- The formula is W.F × VD + HD, where W.F is a weight factor deciding how much importance is given to keeping the pieces in the middle compared to jumping vertically
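One possible reading of the W.F × VD + HD evaluation as code. The coordinate scheme, centre column, and weight value are illustrative assumptions, not the presenters' actual implementation:

```python
# Sketch of the Vertical/Horizontal Displacement evaluation described above.
# Pieces are (row, col) pairs; "self" advances toward higher rows, the
# opponent toward lower rows. All constants are illustrative assumptions.

WEIGHT_FACTOR = 2.0  # W.F: vertical progress vs. staying central
CENTER_COL = 6       # assumed middle column of the board

def vertical_displacement(pieces, direction):
    # Total vertical progress; direction is +1 (upward) or -1 (downward).
    return sum(direction * row for row, _ in pieces)

def horizontal_score(pieces):
    # HD term: reward pieces near the centre column (middle play is best).
    return -sum(abs(col - CENTER_COL) for _, col in pieces)

def evaluate(self_pieces, opp_pieces):
    # Static evaluation = total self - total opponent, each W.F*VD + HD.
    def score(pieces, direction):
        return (WEIGHT_FACTOR * vertical_displacement(pieces, direction)
                + horizontal_score(pieces))
    return score(self_pieces, +1) - score(opp_pieces, -1)
```

Under this sketch, moving a piece forward or toward the centre raises the score, and the same progress by the opponent lowers it, matching the "total self − total opponent" form above.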

Heuristics  Vertical/Horizontal Displacement with split – move the pieces to the edges of the destination triangle once they have moved in  Create space for others to come in  Back piece moving strategy – give more weightage to the moves where the back pieces move forward than the front pieces  Pieces move in clusters.  Concept of disuse wt.

Optimization
1. Consider only those nodes of the minimax algorithm which show some progress
   - Speed improves significantly
2. Prefer the shortest depth first in case of a win
   - Take the path that reaches the goal faster
3. Expand nodes with maximum value first to improve α-β pruning
4. Sort moves to increase the gain of α-β pruning
   - The tradeoff is an O(n²) sorting algorithm versus an O(n) scan for the single best move.
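Point 4 (sorting to help α-β pruning) can be sketched as best-first move ordering with a cheap scoring function; the moves and the scorer here are hypothetical:

```python
def order_moves(moves, quick_score):
    """Sort candidate moves best-first by a cheap heuristic estimate.

    Examining likely-best moves first makes alpha-beta cutoffs happen
    sooner, which more than repays the cost of sorting on typical trees.
    """
    return sorted(moves, key=quick_score, reverse=True)

# Hypothetical usage: moves scored by estimated vertical progress.
moves = [("roll", 1), ("jump", 3), ("double-jump", 5)]
ordered = order_moves(moves, quick_score=lambda m: m[1])
# The best-looking move ("double-jump") is now searched first.
```

Python's built-in sort runs in O(n log n); the slide's O(n²) figure assumes a simple quadratic sort, but the tradeoff against an O(n) scan for just the single best move is the same in spirit.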

Horizon Effect
- Inevitable bad moves can be put off beyond the cutoff on the depth of lookahead, e.g. stalling moves push an unfavorable outcome "over the search horizon" to a place where it cannot be detected
- The unavoidable damaging move is thus never dealt with
- A remedy is singular extensions, where one "clearly better" move is searched beyond the normal depth limit without incurring much extra cost

Quiescence Search
- Always searching to a limited depth leaves the program blind to states just beyond this boundary
- It might choose a path on the basis that it terminates in a good heuristic score … but the next move could be catastrophic
- This is overcome by quiescence search: at unstable positions (e.g. with favourable captures pending), the search depth is extended until a quiescent position is reached
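A quiescence routine in the common negamax style might look like the sketch below. `evaluate`, `noisy_moves`, and `apply_move` are assumed callables (the evaluation is taken from the perspective of the side to move):

```python
import math

def quiescence(state, evaluate, noisy_moves, apply_move,
               alpha=-math.inf, beta=math.inf):
    """Quiescence sketch: keep searching only 'noisy' moves past the
    normal depth limit, until the position is quiet.

    All callables are assumptions for illustration: `evaluate` is the
    static heuristic, `noisy_moves` yields only unstable moves (e.g.
    pending jumps), `apply_move` returns the successor state.
    """
    stand_pat = evaluate(state)  # value if we stop searching here
    if stand_pat >= beta:
        return stand_pat
    alpha = max(alpha, stand_pat)
    for move in noisy_moves(state):
        value = -quiescence(apply_move(state, move), evaluate,
                            noisy_moves, apply_move, -beta, -alpha)
        if value >= beta:
            return value
        alpha = max(alpha, value)
    return alpha
```

At a quiet position `noisy_moves` yields nothing and the static value is returned immediately, so the extension only costs time where the horizon effect would otherwise bite.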

Chinese Checkers: how to move to the goal efficiently?

Make intelligent moves
Board configuration of a two-player game
Back-piece moving strategy – moving in clusters

Leave no pieces behind – the horizon effect
Preference is given to the backward, edge pieces over the rest
Although less powerful, push the back pieces forward

Identify the correct move
It is best to play in the middle – make the best move
All possible moves from a particular piece

Conclusion
- A board game is a wonderful tool to reinforce learning
- Study was done on the Chinese Checkers distance, regular polygons on the checker plane, and its variations of the Pythagorean theorem
- Both minimax and alpha-beta pruning assume perfect play from the opposition
- Increases in processing power will not always make exhaustive search of the game tree possible – pruning techniques are needed to increase the search depth with the same computing resources

References
- Reading material from whitneybabcockmcconnell.com and CMU
- Master's thesis by Paula Ulfhake, Lund University (…na.pdf)
- Rich and Knight

THANK YOU