Seminar on Game Playing in AI

Definition: Game
Game playing is a search problem defined by:
1. Initial state
2. Successor function
3. Goal test
4. Path cost / utility / payoff function
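A minimal sketch, in Python, of how these four components might be represented for a generic two-player game. The class and method names (initial_state, successors, is_terminal, utility) are illustrative assumptions, not taken from any particular library.

# Hypothetical interface for a game-playing search problem.
class Game:
    def initial_state(self):
        """Return the starting position (initial state)."""
        raise NotImplementedError

    def successors(self, state):
        """Successor function: return a list of (move, next_state) pairs."""
        raise NotImplementedError

    def is_terminal(self, state):
        """Goal test: has the game ended?"""
        raise NotImplementedError

    def utility(self, state):
        """Payoff of a terminal state, from MAX's point of view."""
        raise NotImplementedError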

Types of Games
Perfect information game: the player knows all the possible moves of both himself and the opponent, and their results, e.g., chess.
Imperfect information game: the player does not know all the possible moves of the opponent, e.g., bridge, since not all of the cards are visible to the player.

Characteristics of Game Playing
Unpredictable opponent.
Time constraints.

Typical Structure of a Game in AI
2-person game
Players alternate moves
Zero-sum game: one player's loss is the other's gain
Perfect information: both players have access to complete information about the state of the game; no information is hidden from either player
No chance (e.g., dice) involved
Examples: Tic-Tac-Toe, Checkers, Chess

Game Tree
Tic-Tac-Toe game tree (figure). Ref: Lehre/Einfuehrung-KI-SS2003/folien06.pdf

MAX
(Figure.)

MAX (cont.)
(Figure.)

MINIMAX
Two players: MIN and MAX.
Utility of MAX = -(Utility of MIN).
Utility of the game = Utility of MAX.
MIN tries to decrease the utility of the game; MAX tries to increase it.

MINIMAX Tree
(Figure.) Ref: Lehre/Einfuehrung-KI-SS2003/folien06.pdf

Assignment of MINIMAX Values

Minimax_value(u) {                          // u is the node to score
    if u is a leaf
        return score of u
    else if u is a MIN node, with children v1, ..., vn
        return min(Minimax_value(v1), ..., Minimax_value(vn))
    else                                    // u is a MAX node, with children v1, ..., vn
        return max(Minimax_value(v1), ..., Minimax_value(vn))
}
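As a concrete illustration, here is the same recursion as runnable Python over a small explicit tree. The nested-list representation and the alternation of MAX and MIN levels are assumptions of this example, not part of the slide.

# Minimax over an explicit tree: a node is either a number (leaf score)
# or a list of child nodes. Levels alternate MAX, MIN, MAX, ...
def minimax_value(node, is_max=True):
    if not isinstance(node, list):          # leaf
        return node
    child_values = [minimax_value(child, not is_max) for child in node]
    return max(child_values) if is_max else min(child_values)

# Example: MAX chooses between two MIN nodes; the value of the game is 3.
print(minimax_value([[3, 12, 8], [2, 4, 6]]))   # -> 3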

MINIMAX Algorithm

function MINIMAX-DECISION(state) returns an operator
    for each op in OPERATORS[game] do
        VALUE[op] <- MINIMAX-VALUE(APPLY(op, state), game)
    end
    return the op with the highest VALUE[op]

function MINIMAX-VALUE(state, game) returns a utility value
    if TERMINAL-TEST(state) then
        return UTILITY(state)
    else if MAX is to move in state then
        return the highest MINIMAX-VALUE of SUCCESSORS(state)
    else
        return the lowest MINIMAX-VALUE of SUCCESSORS(state)
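A sketch of the same decision procedure in Python, written against the hypothetical Game interface from the definition slide (successors, is_terminal, and utility are assumed method names):

# MINIMAX-DECISION: pick the move whose resulting state has the highest
# minimax value. 'game' follows the hypothetical interface sketched earlier.
def minimax_decision(game, state):
    def value(s, max_to_move):
        if game.is_terminal(s):
            return game.utility(s)
        vals = [value(ns, not max_to_move) for _, ns in game.successors(s)]
        return max(vals) if max_to_move else min(vals)

    # MAX moves at the root; the opponent (MIN) moves next.
    return max(game.successors(state),
               key=lambda pair: value(pair[1], False))[0]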

Properties of MINIMAX
Complete: yes, if the tree is finite.
Optimal: yes, against an optimal opponent.
Time: O(b^d) (depth-first exploration).
Space: O(b*d) (depth-first exploration).
b: branching factor; d: depth of the search tree.
Time constraints do not allow the tree to be fully explored. How can we get utility values without exploring the search tree all the way to the leaves?

Evaluation Function
An evaluation function (static evaluator) is used to estimate the goodness of a game position. The zero-sum assumption allows us to use a single evaluation function to describe the goodness of a position with respect to both players. Let f(n) be the evaluation of position n. Then:
– f(n) >> 0: position n is good for me and bad for you
– f(n) << 0: position n is bad for me and good for you
– f(n) near 0: position n is a neutral position

Evaluation Function (cont.)
One possible evaluation function for Tic-Tac-Toe is:
f(n) = [# of 3-lengths open for me] - [# of 3-lengths open for you],
where a 3-length is a complete row, column, or diagonal.
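A sketch of this evaluation function in Python. The board encoding (a 3x3 grid of 'X', 'O', or None, with 'X' taken as "me") is an assumption made for the example.

# f(n) = (# of 3-lengths still open for "me") - (# open for "you").
# A 3-length is "open" for a player if it contains none of the opponent's marks.
LINES = ([[(r, c) for c in range(3)] for r in range(3)] +                   # rows
         [[(r, c) for r in range(3)] for c in range(3)] +                   # columns
         [[(i, i) for i in range(3)], [(i, 2 - i) for i in range(3)]])      # diagonals

def evaluate(board, me='X', you='O'):
    def open_lines(opponent):
        # lines containing none of the opponent's marks
        return sum(1 for line in LINES
                   if all(board[r][c] != opponent for r, c in line))
    return open_lines(you) - open_lines(me)

# Empty board: 8 open lines for each player, so f = 0.
print(evaluate([[None] * 3 for _ in range(3)]))   # -> 0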

Alpha-Beta Pruning
At each MAX node n, alpha(n) = the maximum value found so far.
At each MIN node n, beta(n) = the minimum value found so far.
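A sketch of alpha-beta search in Python, over the same explicit-tree representation used in the minimax example above (that representation is an assumption of the example, not part of the slides):

# Alpha-beta over an explicit tree: prune a branch as soon as the current
# node's value can no longer affect the choice at an ancestor.
def alphabeta(node, alpha=float('-inf'), beta=float('inf'), is_max=True):
    if not isinstance(node, list):                 # leaf
        return node
    if is_max:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:                      # beta cutoff
                break
        return value
    else:
        value = float('inf')
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:                      # alpha cutoff
                break
        return value

# Same tree as before: the second MIN node is cut off after seeing 2.
print(alphabeta([[3, 12, 8], [2, 4, 6]]))          # -> 3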

Alpha-Beta Pruning (cont.)
Worked example (figures). Ref: Lehre/Einfuehrung-KI-SS2003/folien06.pdf

Effectiveness of Alpha-Beta Pruning
Worst case: the branches are ordered so that no pruning takes place, and alpha-beta gives no improvement over exhaustive search.
Best case: each player's best move is the left-most alternative (i.e., evaluated first).
In practice one often gets O(b^(d/2)) rather than O(b^d); in chess, for example, this takes the effective branching factor from b ~ 35 down to b ~ 6. This permits much deeper search in the same amount of time and helps make computer chess competitive with humans.

Iterative Deepening Search
IDS runs alpha-beta search with an increasing depth limit.
The inner loop is a full alpha-beta search with a specified depth limit m.
When the clock runs out, we use the solution found at the previous depth limit.
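A sketch of this outer loop in Python; alphabeta_decision is a hypothetical depth-limited search routine (not defined in these slides), and the one-second time budget is an assumption of the example.

import time

# Outer loop of iterative deepening: keep the best move found at the last
# depth limit that finished before the clock ran out.
def iterative_deepening(game, state, alphabeta_decision, time_budget=1.0):
    deadline = time.monotonic() + time_budget
    best_move = None
    depth = 1
    while time.monotonic() < deadline:
        # A real implementation would also abort the inner search itself
        # once the deadline passes; this sketch only checks between depths.
        best_move = alphabeta_decision(game, state, depth_limit=depth)
        depth += 1
    return best_move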

Applications
Entertainment
Economics
Military
Political Science

Conclusion
Game theory has remained one of the most interesting parts of AI since the birth of the field. It is a vast and interesting topic; it mainly deals with working within constrained areas to get the desired results. Games illustrate several important points about Artificial Intelligence, for example that perfection cannot be attained but can be approximated.

References
Stuart Russell and Peter Norvig, "Artificial Intelligence: A Modern Approach" (Second Edition), Prentice Hall.
Theodore L. Turocy (Texas A&M University) and Bernhard von Stengel (London School of Economics), "Game Theory", CDAM Research Report, Oct.
Lehre/Einfuehrung-KI-SS2003/folien06.pdf

Thank you……