Chess and AI. Group Members: Abhishek Sugandhi (04305016), Sanjeet Khaitan (04305018), Gautam Solanki (04305027).

What We Need: A way to represent a chess board in memory. A technique to choose a move from among all legal possibilities. A way to compare moves and positions, so that the program makes intelligent choices.

Board Representation: Memory was at much more of a premium several decades ago than it is today, so early chess programs laid a strong foundation of efficient and compact data structures. The most obvious board representation is a 64-byte array, with one byte representing each of the 64 squares on the board.

Bit Boards: An enhancement to board representation that is still prevalent today is the bit board: a 64-bit word that holds one binary value for each square. This greatly reduces computation time for the program. For example, KnightMoves[c2] can be represented as a bit board with a 1 on each square a knight on c2 attacks (a1, a3, b4, d4, e1 and e3) and 0 everywhere else.

BitBoards contd. Similarly, WhitePieces can be a bit board giving all the positions where there are white pieces on the board. So an AND operation of KnightMoves[c2] with the NOT of WhitePieces gives us all the white knight moves from c2.
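A minimal sketch of the two slides above, using a plain Python int as the 64-bit word. Squares are indexed a1 = 0, b1 = 1, ..., h8 = 63; the names `knight_moves` and `white_pieces` are illustrative, not taken from any particular engine.

```python
# Bit board sketch: one 64-bit integer per piece set; bit i = square i.

def square(file, rank):
    """0-based (file, rank) -> bit index (a1 = 0, ..., h8 = 63)."""
    return rank * 8 + file

def knight_moves(sq):
    """Bit board with a 1 on every square a knight on `sq` attacks."""
    f, r = sq % 8, sq // 8
    bb = 0
    for df, dr in ((1, 2), (2, 1), (2, -1), (1, -2),
                   (-1, -2), (-2, -1), (-2, 1), (-1, 2)):
        nf, nr = f + df, r + dr
        if 0 <= nf < 8 and 0 <= nr < 8:
            bb |= 1 << square(nf, nr)
    return bb

c2 = square(2, 1)
attacks = knight_moves(c2)        # bits set on a1, a3, b4, d4, e1, e3
white_pieces = 0xFFFF             # ranks 1 and 2 occupied, as at the start
legal = attacks & ~white_pieces   # AND with NOT WhitePieces, as in the text
```

With white pieces filling ranks 1 and 2, the AND-with-NOT masks out the a1 and e1 targets, leaving the four knight moves that do not land on a friendly piece.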

Searching: The most basic search algorithm is minimax. This can be enhanced by alpha-beta pruning, especially when combined with iterative deepening.

Minimax: The computer selects the move with maximum benefit to itself, assuming that the opponent will reply with the move that minimizes that benefit.
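A minimal minimax sketch on an abstract game tree; `children(state)` and `evaluate(state)` are hypothetical stand-ins for move generation and static evaluation in a real engine.

```python
# Minimax sketch: maximize on our turn, minimize on the opponent's.

def minimax(state, depth, maximizing, children, evaluate):
    kids = children(state)
    if depth == 0 or not kids:
        return evaluate(state)
    scores = [minimax(k, depth - 1, not maximizing, children, evaluate)
              for k in kids]
    return max(scores) if maximizing else min(scores)

# Toy tree: root A (computer to move), opponent chooses at B and C.
tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G']}
leaf = {'D': 3, 'E': 5, 'F': 2, 'G': 9}
best = minimax('A', 2, True,
               lambda s: tree.get(s, []), lambda s: leaf.get(s, 0))
```

Here the opponent would hold branch B to 3 and branch C to 2, so the computer picks B for a value of 3.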

Alpha-Beta Pruning: The problem with minimax is that we expand every node down to a certain depth, even though in certain cases we are wasting our time. To combat this, a procedure known as alpha-beta pruning (the name is historical) has been developed.
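A sketch of alpha-beta pruning on the same kind of abstract tree: it returns exactly the minimax value, but stops expanding a branch as soon as it can prove the branch cannot affect the final choice. `children` and `evaluate` are hypothetical stand-ins as before.

```python
# Alpha-beta sketch: alpha = best score the maximizer can force so far,
# beta = best score the minimizer can force; prune when alpha >= beta.

def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    kids = children(state)
    if depth == 0 or not kids:
        return evaluate(state)
    if maximizing:
        value = float('-inf')
        for k in kids:
            value = max(value, alphabeta(k, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break          # opponent will never allow this line
        return value
    value = float('inf')
    for k in kids:
        value = min(value, alphabeta(k, depth - 1, alpha, beta,
                                     True, children, evaluate))
        beta = min(beta, value)
        if alpha >= beta:
            break              # we would never choose this line
    return value

tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G']}
leaf = {'D': 3, 'E': 5, 'F': 2, 'G': 9}
best = alphabeta('A', 2, float('-inf'), float('inf'), True,
                 lambda s: tree.get(s, []), lambda s: leaf.get(s, 0))
```

On this tree, after branch B guarantees 3, the search sees F = 2 inside branch C and cuts off without ever evaluating G.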

Iterative Deepening: On many occasions there is a time limit within which the computer must decide on a move. The idea behind iterative deepening is: begin by searching all moves arising from the position to depth 2, search again to depth 3, then depth 4, and so on, until the appropriate depth is reached or the time limit runs out. Even if time runs out before the appropriate depth is reached, we still have a good estimate of a move from the last completed iteration.
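A sketch of that loop; `search(root, depth)` is a hypothetical stand-in for any fixed-depth routine such as alpha-beta, and the result of the last completed iteration is kept as the fallback when time expires.

```python
import time

# Iterative-deepening sketch: search depth 2, then 3, 4, ..., keeping
# the last completed result so a timeout still yields a usable move.

def iterative_deepening(root, max_depth, time_limit, search):
    deadline = time.monotonic() + time_limit
    best = None
    for depth in range(2, max_depth + 1):
        best = search(root, depth)           # result of this iteration
        if time.monotonic() >= deadline:
            break                            # time is up; keep the estimate
    return best
```

A side benefit the slide does not mention: the shallow iterations cost little compared to the deepest one, and their results can be reused to order moves for the next iteration.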

Transposition Table: This is essentially a hash table that holds board positions that have already been analyzed, saving the engine from repeating work it has already done. An "opening book", which is a database of solid opening moves, is almost always included in the transposition table. The transposition table also helps a great deal in the endgame, where, with most pieces off the board, the move tree generates many identical positions through different move orders. While nearly every modern chess program has a transposition table, its one disadvantage is that it takes up a huge amount of memory. Despite this, the transposition table shows how a little pre-processing can simplify computations during the actual game.
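A minimal sketch of the idea: a dict from a position key to the score already computed for it, so identical positions reached through different move orders are evaluated only once. Real engines key on a hash of the position (e.g. Zobrist hashing) and store the search depth and bound type as well; this sketch and its helper names omit those details.

```python
# Transposition-table sketch: memoize an expensive evaluation by position.

table = {}
evaluations = 0

def evaluate_position(position):
    """Toy stand-in for an expensive static evaluation or subtree search."""
    global evaluations
    evaluations += 1
    return sum(ord(c) for c in position) % 1000

def evaluate_cached(position):
    if position not in table:                  # miss: do the work once
        table[position] = evaluate_position(position)
    return table[position]                     # hit: free lookup

a = evaluate_cached("same-position")
b = evaluate_cached("same-position")           # second call is a table hit
```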

Horizon Effect: A problem with these simple algorithms is that the chess program can only scan to a certain depth. For example, if capturing the opponent’s queen on ply 7 results in getting checkmated on ply 8, and the program only searches 7 plies deep, it will think the queen capture is a great move. This is known as the “horizon effect”.

Quiescence Search: This can be used to avoid the horizon effect. The search is continued past the depth limit until a "quiet position" is reached. A "quiet position" is a position in which none of the computer's pieces can be captured by the opponent on any further move.
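A sketch of quiescence search in negamax form: at the depth limit, only capture moves are extended until the position goes quiet, and the static score is trusted from there. `captures(state)` and `evaluate(state)` are hypothetical helpers, and `evaluate` scores from the side to move, hence the sign flip.

```python
# Quiescence sketch: extend only "noisy" (capture) moves past the horizon.

def quiescence(state, alpha, beta, captures, evaluate):
    stand_pat = evaluate(state)          # score if we stop capturing now
    if stand_pat >= beta:
        return beta
    alpha = max(alpha, stand_pat)
    for k in captures(state):            # quiet positions have no captures
        score = -quiescence(k, -beta, -alpha, captures, evaluate)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha
```

On a quiet position this simply returns the static score; when a capture improves on standing pat, the capture's (negated) score is returned instead.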

Aspirated Search: This is used to reduce the search time by cutting off more nodes. In alpha-beta pruning we take -INFINITY and +INFINITY as the initial values of alpha and beta respectively. The idea behind aspirated search is to shrink this -INFINITY to +INFINITY window to a smaller one. The technique is most effective combined with iterative deepening: once we have finished the 2-ply search, we have an approximate value, A, for the result of the 3-ply search. So when initiating the 3-ply search we can set alpha and beta to A - constant and A + constant respectively. This reduces the search time.
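A sketch of that window logic. One detail the slide leaves implicit: if the true score falls outside the narrowed window, the search "fails" and must be redone with a wider window, so the margin trades speed against occasional re-searches. `search(alpha, beta)` is a hypothetical stand-in for a window-aware alpha-beta routine, and the margin is an illustrative constant.

```python
# Aspiration-window sketch: seed alpha/beta near the previous score A,
# fall back to a full-width re-search if the result escapes the window.

def aspirated_search(search, prev_score, margin=50):
    alpha, beta = prev_score - margin, prev_score + margin
    score = search(alpha, beta)
    if score <= alpha or score >= beta:                 # window failed
        score = search(float('-inf'), float('inf'))     # re-search wide
    return score
```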

Singular Extensions: Another way to combat the horizon effect, and also improve accuracy, is to search several additional plies when a particularly promising move is discovered during search. This can quickly determine whether the move's high valuation is legitimate. Searching through extra plies like this is an expensive operation, and it was one of the main features of IBM's Deep Blue.
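A toy sketch of the triggering condition only: when one move's preliminary score stands far above every alternative, it is searched deeper to confirm it. The margin and extension amount are illustrative placeholders, not values from Deep Blue.

```python
# Singular-extension sketch: extend the search depth for a move whose
# score dominates its best sibling by more than a margin.

def search_depth(base_depth, best_score, second_best_score,
                 margin=100, extension=2):
    if best_score - second_best_score >= margin:   # "singular" move
        return base_depth + extension              # look deeper to confirm
    return base_depth
```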

Position Evaluation: Static evaluation of a board position depends on several factors. The most important is material balance (the pieces on the chess board). Mobility and board control are two further measurable features of a position.

Position Evaluation (Continued):
● Material balance is an account of which pieces are on the board for each side.
● According to chess literature, a queen may be worth 900 points, a rook 500, a bishop 325, a knight 300 and a pawn 100; the king has infinite value.
● Computing material balance is therefore simple: a side's material value is MB = Sum(Np * Vp), where Np is the number of pieces of a given type on the board and Vp is that piece's value.
● If you have more material on the board than your opponent, you are in good shape.
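The formula above can be sketched directly, using the point values just quoted (queen 900, rook 500, bishop 325, knight 300, pawn 100); the king is never traded, so it needs no finite value here.

```python
# Material balance MB = Sum(Np * Vp) for one side, then the difference.

VALUES = {'Q': 900, 'R': 500, 'B': 325, 'N': 300, 'P': 100}

def material(counts):
    """`counts` maps a piece letter to the number that side has on board."""
    return sum(n * VALUES[p] for p, n in counts.items())

white = {'Q': 1, 'R': 2, 'B': 2, 'N': 2, 'P': 8}   # full starting army
black = {'Q': 1, 'R': 2, 'B': 2, 'N': 1, 'P': 8}   # a knight down
balance = material(white) - material(black)          # +300 for White
```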

Mobility: It can be defined as the number of legal moves a player has. A player with more mobility is considered to be in better shape.

Board Control: Board control is another measurable feature of a position. It can be gauged by whether or not the king is protected (e.g. by castling), how many rooks are on open files, and how many central squares are occupied or attacked. Doubled and tripled pawns (two or three pawns on the same file as a result of a capture) are known to be weak, and isolated pawns (a pawn with no friendly pawn on either adjacent file) also denote weak positions.

Linear Evaluation Functions: A linear function of f1, f2, f3, ... is a weighted sum of f1, f2, f3, .... A linear evaluation function is w1.f1 + w2.f2 + w3.f3 + ... + wn.fn, where f1, f2, f3, ... are features like board control, mobility, etc., and w1, w2, w3, ... are the weights. Idea: more important features get more weight. The quality of play will depend directly on the quality of the evaluation function. To build an evaluation function we have to – 1. construct good features (using expert knowledge, heuristics) – 2. pick good weights (which can be learned).
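The weighted sum above is a one-liner in code. The feature names and weights below are illustrative placeholders, not tuned values.

```python
# Linear evaluation sketch: score = w1*f1 + w2*f2 + ... + wn*fn.

def evaluate(features, weights):
    return sum(weights[name] * value for name, value in features.items())

weights  = {'material': 1.0, 'mobility': 2.0, 'board_control': 0.5}
features = {'material': 200, 'mobility': 12, 'board_control': 4}
score = evaluate(features, weights)   # 200*1.0 + 12*2.0 + 4*0.5 = 226.0
```

Because the function is linear in the weights, the weights are a natural target for automated tuning, which is what point 2 above refers to.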

Conclusion: Chess and game-tree AI have come a long way since they first began. The best evidence of this is that the greatest chess player in the world, Garry Kasparov, was defeated by a machine built primarily upon the topics discussed in the previous slides. As computer architecture improves, coupled with stronger and more efficient algorithms, we can only expect this classical brute-force, heuristic style of AI to become even stronger.


Project Proposal: We will create a one-player chess program using the iterative deepening technique.