A TIE IS NOT A LOSS Paul Adamiak T02 Aruna Meiyeppen T01.

Presentation transcript:

A TIE IS NOT A LOSS Paul Adamiak T02 Aruna Meiyeppen T01

Strategy
- Check to see if we can win
- Check to see if opponent can win
- Check if we can make 4 in a row
- Check if opponent can make 4 in a row
- Make next best move
[Diagram: game tree showing the root and its possible moves]
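A minimal sketch of this priority list, assuming a square five-in-a-row board of 'x'/'o'/'.' cells; the board representation and every helper name here are assumptions, not the team's actual code.

```python
# Hypothetical sketch of the priority ordering above; not the team's code.
DIRS = [(0, 1), (1, 0), (1, 1), (1, -1)]   # horizontal, vertical, two diagonals

def run_length(board, r, c, dr, dc, player):
    """Length of the run through (r, c) in direction (dr, dc) if player owns (r, c)."""
    n = len(board)
    count = 1
    for sign in (1, -1):
        rr, cc = r + sign * dr, c + sign * dc
        while 0 <= rr < n and 0 <= cc < n and board[rr][cc] == player:
            count += 1
            rr, cc = rr + sign * dr, cc + sign * dc
    return count

def best_run(board, r, c, player):
    """Longest run the player would own after playing at the empty cell (r, c)."""
    return max(run_length(board, r, c, dr, dc, player) for dr, dc in DIRS)

def choose_move(board, me, opp):
    empty = [(r, c) for r in range(len(board))
                    for c in range(len(board)) if board[r][c] == '.']
    if not empty:
        return None
    # Priorities straight from the slide: our win, their win, our four, their four.
    for player, target in ((me, 5), (opp, 5), (me, 4), (opp, 4)):
        for (r, c) in empty:
            if best_run(board, r, c, player) >= target:
                return (r, c)
    # Otherwise "make next best move": extend our own longest run.
    return max(empty, key=lambda rc: best_run(board, rc[0], rc[1], me))
```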

Strategy Success
The strategy didn't work as well as hoped:
- Able to always beat a human user, but rarely another program
The defensive move is determined by a linear search of the board:
- It handles an arbitrary threat (whichever the scan finds first), which may not be the most threatening one
[Diagram: the chosen threat vs. the most threatening threat]
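An illustrative contrast of that limitation, reusing best_run from the sketch above; both functions and the threat threshold are assumptions of mine.

```python
# Illustrative only: a row-major scan returns whichever threat appears first,
# while scoring every candidate finds the genuinely most dangerous square.
def first_threat(board, opp):
    for r in range(len(board)):
        for c in range(len(board)):
            if board[r][c] == '.' and best_run(board, r, c, opp) >= 4:
                return (r, c)          # first threat met by the scan, maybe a minor one
    return None

def most_threatening(board, opp):
    empty = [(r, c) for r in range(len(board))
                    for c in range(len(board)) if board[r][c] == '.']
    return max(empty, key=lambda rc: best_run(board, rc[0], rc[1], opp), default=None)
```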

Favourite Move
Filling space in between two threats.
[Diagram: the "fill space" move placed between two threats]

CPSC 335 Assignment 5 Leina Leung TUT 01 (Russel Apu) & Katy Lin TUT 02 (Jagoda Walny)

Explain what strategy you used and why.
Alpha-Beta Pruning
Why do we use alpha-beta pruning? To eliminate uninteresting branches of the search tree.
Move ordering, for every empty square:
- If the next move is a win or blocks the opponent from winning, then add it to the list
- Else if the empty square is between 3 squares, or adjacent to 3 connected squares of the same type, then add it to the list
- Else if this empty square makes the largest connected group of adjacent squares of the same type, then add it to the list
Transposition table: we used a transposition table to reduce calculation time. Since at any state of the board there are many different move orderings that reach that state, we avoid recomputing repeated states by caching the utility of each state.
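A rough sketch of what alpha-beta with move ordering and a transposition table can look like; it reuses DIRS, run_length, and best_run from the first sketch, and the evaluation heuristic and every name here are assumptions rather than the team's actual code.

```python
import math

transposition = {}          # (board key, side to move) -> (searched depth, utility)

def board_key(board):
    return ''.join(''.join(row) for row in board)

def apply_move(board, move, player):
    nxt = [row[:] for row in board]
    nxt[move[0]][move[1]] = player
    return nxt

def evaluate(board, me, opp):
    # Crude stand-in heuristic: a finished run of five dominates, otherwise
    # longest own run minus longest opponent run.
    def longest(p):
        runs = [run_length(board, r, c, dr, dc, p)
                for r in range(len(board)) for c in range(len(board))
                if board[r][c] == p for dr, dc in DIRS]
        return max(runs, default=0)
    mine, theirs = longest(me), longest(opp)
    if mine >= 5:
        return 1000
    if theirs >= 5:
        return -1000
    return mine - theirs

def alpha_beta(board, depth, alpha, beta, maximizing, me, opp):
    key = (board_key(board), maximizing)
    hit = transposition.get(key)
    if hit is not None and hit[0] >= depth:
        return hit[1]               # position already reached via another move order

    score = evaluate(board, me, opp)
    moves = [(r, c) for r in range(len(board))
                    for c in range(len(board)) if board[r][c] == '.']
    if depth == 0 or not moves or abs(score) >= 1000:
        return score

    # Move ordering: search moves that extend the longest run first, so that
    # cutoffs happen early and uninteresting branches get pruned.
    player = me if maximizing else opp
    moves.sort(key=lambda rc: best_run(board, rc[0], rc[1], player), reverse=True)

    if maximizing:
        value = -math.inf
        for move in moves:
            value = max(value, alpha_beta(apply_move(board, move, me),
                                          depth - 1, alpha, beta, False, me, opp))
            alpha = max(alpha, value)
            if alpha >= beta:
                break               # beta cutoff
    else:
        value = math.inf
        for move in moves:
            value = min(value, alpha_beta(apply_move(board, move, opp),
                                          depth - 1, alpha, beta, True, me, opp))
            beta = min(beta, value)
            if alpha >= beta:
                break               # alpha cutoff
    # Simplified: a full engine would also record whether this value is exact
    # or only an upper/lower bound from a cutoff.
    transposition[key] = (depth, value)
    return value
```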

Describe how well your strategy worked (use your overall ranking/score to support your claims), and explain why you think it performed the way it did.
The strategy we decided to use performed well for us: we ranked 3rd in the competition. The move ordering is mostly how we win, even though we only search 7 plies.
Limitations: due to the 3-second time limit and the inefficiency of our move ordering, we were only able to search 7 plies deep. This only allowed us to look 4 moves in advance, so we do not always make the best move.
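One common way to respect a hard time budget like this is iterative deepening on top of the alpha-beta sketch above, keeping the result of the deepest search that finished in time; the slides do not say how the team enforced the limit, so this is only an assumption.

```python
import math
import time

def best_root_move(board, depth, me, opp):
    """Pick the root move with the best alpha-beta value at the given depth."""
    moves = [(r, c) for r in range(len(board))
                    for c in range(len(board)) if board[r][c] == '.']
    if not moves:
        return None
    return max(moves, key=lambda m: alpha_beta(apply_move(board, m, me), depth - 1,
                                               -math.inf, math.inf, False, me, opp))

def timed_move(board, me, opp, time_limit=3.0, max_depth=7):
    """Search deeper and deeper, stopping before the time budget runs out."""
    deadline = time.monotonic() + time_limit
    best = None
    for depth in range(1, max_depth + 1):
        if time.monotonic() >= deadline:
            break          # a real engine would also check the clock inside the search
        best = best_root_move(board, depth, me, opp)   # keep deepest completed result
    return best
```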

Our Best Move
[Animation: FORK]

THE END Thank you.

Group member: Shan Chen. Tutorial number: T01

Game Strategy Search algorithm: Alpha-beta pruning

How my strategy worked
Since minimax search is depth-first, I sort the possible moves by the value of the evaluation function, so that nodes with possibly better outcomes are checked before the timeout. My evaluation function did not work very well at the beginning; it made lots of mistakes during the first two days.
- Me vs. Team 4: 12 : 200
- Me vs. Team 10: 62 : 90
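The sorting step could look like the snippet below; it reuses evaluate and apply_move from the alpha-beta sketch earlier, and since the actual evaluation function is not shown on the slide, this is illustrative only.

```python
# Illustrative only: statically evaluate each child position once and visit the
# most promising ones first, so better lines are searched before the timeout.
def ordered_moves(board, moves, player, me, opp):
    scored = [(evaluate(apply_move(board, m, player), me, opp), m) for m in moves]
    # Our own moves: best score for us first; opponent moves: worst score for us first.
    scored.sort(key=lambda sm: sm[0], reverse=(player == me))
    return [m for _, m in scored]
```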

Best move

CPSC 335 – Assignment 5 Tut 01 David Clarke Brady Lill

Algorithm used: We decided to implement a "greedy" algorithm, because it was the simplest and most straightforward algorithm to implement. Our basic goal was to produce a chain of 5 before the opponent does, and to block only when the opponent has a chance to win (i.e. has already got a chain of >= 3).
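A minimal sketch of that greedy rule, reusing best_run from the first sketch; the exact thresholds and the helper name are my reading of the description, not the group's code.

```python
# Greedy: always extend our own chain; block only when the opponent already
# has a chain of three or more and we cannot finish a chain of five this turn.
def greedy_move(board, me, opp):
    empty = [(r, c) for r in range(len(board))
                    for c in range(len(board)) if board[r][c] == '.']
    if not empty:
        return None
    own_best = max(empty, key=lambda rc: best_run(board, rc[0], rc[1], me))
    if best_run(board, own_best[0], own_best[1], me) >= 5:
        return own_best                                  # go for the win
    threat = max(empty, key=lambda rc: best_run(board, rc[0], rc[1], opp))
    if best_run(board, threat[0], threat[1], opp) >= 4:  # opponent already has >= 3
        return threat                                    # block
    return own_best                                      # otherwise stay greedy
```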

We Lose More Than We Win
One of the major problems with our algorithm is that our blocking strategy is only invoked if we believe that we have no chance of winning. If we believe that we can win, we will not block; in effect, we are always being greedy and going for the win. One improvement that would make our blocking strategy more effective is to build a move tree consisting of all possible blocking moves, which would in turn produce a more effective blocking strategy.
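One possible reading of that improvement, again reusing best_run from the first sketch: enumerate every square that blocks an opponent run instead of committing to a single block, so a deeper search could compare them. This is only an interpretation of the proposal, not an implementation of it.

```python
# Hypothetical: collect all candidate blocking squares (cells that would cut
# an opponent run of three or more) for a later search to choose between.
def candidate_blocks(board, opp, min_run=4):
    return [(r, c) for r in range(len(board))
                   for c in range(len(board))
                   if board[r][c] == '.' and best_run(board, r, c, opp) >= min_run]
```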

Best Move: Our best move is the Opponent Block, where we attempt to block an opponent who is in a position to win (i.e. has already got a chain of >= 3) by placing a block on either side of the chain. The problem is that it is only able to place one block.
[Diagram: the opponent's chain "xxx" capped by our "o" (labelled BLOCK), with our own "ooo" chain elsewhere on the board]
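A sketch of finding both ends of an opponent chain, reusing DIRS and run_length from the first sketch; this is one way the block could be applied to either side, not the group's actual code.

```python
# Hypothetical: return the empty squares at both ends of every opponent run of
# at least min_len stones, so the caller can block more than one end over time.
def block_ends(board, opp, min_len=3):
    n, ends = len(board), []
    for r in range(n):
        for c in range(n):
            if board[r][c] != opp:
                continue
            for dr, dc in DIRS:
                # Only start from the first stone of a run to avoid duplicates.
                pr, pc = r - dr, c - dc
                if 0 <= pr < n and 0 <= pc < n and board[pr][pc] == opp:
                    continue
                length = run_length(board, r, c, dr, dc, opp)
                if length < min_len:
                    continue
                for rr, cc in ((r - dr, c - dc), (r + dr * length, c + dc * length)):
                    if 0 <= rr < n and 0 <= cc < n and board[rr][cc] == '.':
                        ends.append((rr, cc))
    return ends
```

For example, block_ends(board, 'x') would return the open squares on both sides of every 'x' run of three or more, instead of committing to a single block.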

C-Blocking Vendetta, Defined Patrick Bick, Sarabjot Samra CPSC335 – Data Structures II Winter 2008

The Basics:
- Not about victory; assure opponent cannot win
- Victory is incidental and essentially random

How it Works:
- Play for the bad side, but use the wrong symbol
- Plot most advantageous moves for opponent in the area, then steal them!
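Read literally, the idea could be sketched as below, reusing best_run from the first sketch; this is a very rough approximation of the authors' description.

```python
# Hypothetical: score every empty cell from the opponent's point of view and
# occupy the one they would most want; the caller then plays it with our symbol.
def vendetta_move(board, opp):
    empty = [(r, c) for r in range(len(board))
                    for c in range(len(board)) if board[r][c] == '.']
    if not empty:
        return None
    return max(empty, key=lambda rc: best_run(board, rc[0], rc[1], opp))
```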

Why it Works:
- Every move opponent makes is taken as aggressive
- The “path to victory” is littered with our “wrong” moves

Funny Things:
- Works equally well (badly?) against random / greedy opponents
- Not overly difficult to compute
- “Greedy” for the wrong reasons
- Should have done a minimax

Questions?