Search CSC 358/458, 5/22/2006

Outline
  Homework #6
  Game search
    States and operators
    Issues
  Search techniques
    DFS, BFS
    Beam search
    A* search
    Alpha-beta search

Homework #7 #C with-list-iterator doiter

Game Playing
  How can we automate game playing?
    One of the first problems tackled by AI research
  Basic idea
    represent the "state" of the game
      set of cards
      board position
    moves are changes in game state
    winning means reaching a particular state
      defined by the rules
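
As a tiny illustrative sketch (tic-tac-toe is chosen here for concreteness; the ttt-state, ttt-moves, and ttt-make-move names are invented for this example, not part of the lecture code), a state can be a record of the board plus whose turn it is, and the legal moves are simply the empty cells:

;; Hypothetical tic-tac-toe representation: nine cells, each X, O, or NIL,
;; plus the player to move.
(defstruct ttt-state
  (board (make-list 9))
  (to-move 'x))

(defun ttt-moves (state)
  "Indices of the empty cells: the legal moves in this state."
  (loop for cell in (ttt-state-board state)
        for i from 0
        when (null cell) collect i))

(defun ttt-make-move (state i)
  "Return the new state after the player to move takes cell I."
  (let ((board (copy-list (ttt-state-board state))))
    (setf (nth i board) (ttt-state-to-move state))
    (make-ttt-state :board board
                    :to-move (if (eq (ttt-state-to-move state) 'x) 'o 'x))))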

Game tree
  Think of each possible position as a node
  and each possible move as an edge
  We have a graph structure
    starting state
    subsequent states
      branching for different possible moves
    terminates with winning (or losing)

How to win?
  Find a path through the tree to a winning state
    make all the moves along that path
  But what about the opponent?
  What about uncertainty?
    we'll return to these questions in a minute

Graph search
  Game tree search is a special case of graph search
    lots of other AI problems have been conceptualized the same way
  Search domains
    Running Prolog programs
      each state is an assignment of bindings
      links are applying rules to generate new bindings
    Planning and scheduling
      nodes are states of the world
      links are operations that can be performed
  A major subfield of AI

Planning
  States are combinations of predicates
  Operators may have conditional effects
  Interleaving of planning and execution
    replanning

What we need Start State Goal State Successors Search Strategy

Tree Search

Tree Search, cont'd
  Main question: how do we order the states?

Tree Search, cont'd

;; FAIL is assumed here to be NIL (it is not defined on the slide), so a
;; failed search simply returns NIL.
(defconstant fail nil)

(defun tree-search (states goal-p successors combiner)
  "Find a state satisfying GOAL-P; COMBINER decides the search order."
  (cond ((null states) fail)
        ((funcall goal-p (first states)) (first states))
        (t (tree-search (funcall combiner
                                 (funcall successors (first states))
                                 (rest states))
                        goal-p successors combiner))))

Tree Search: Depth-First Search
  Work on the longest paths first
  Backtrack only when the current state has no more successors

(defun depth-first-search (start goal-p successors)
  "Search new states first until the goal is reached."
  (tree-search (list start) goal-p successors #'append))

Tree Search: DFS Summary
  Depth-first search is OK in finite search spaces
  In infinite search spaces, depth-first search may never terminate

Tree Search: Breadth-First Search
  Search the tree layer by layer

(defun prepend (x y)
  "Prepend Y to the front of X."
  (append y x))

(defun breadth-first-search (start goal-p successors)
  "Search old states first until the goal is reached."
  (tree-search (list start) goal-p successors #'prepend))
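
A quick way to see the difference between the two combiners is a toy successor function (binary-tree and is are helper names assumed here, in the spirit of the code above): state n has successors 2n and 2n+1, so the tree is infinite.

(defun binary-tree (x)
  "Successors in an infinite binary tree: node X leads to 2X and 2X+1."
  (list (* 2 x) (+ 1 (* 2 x))))

(defun is (value)
  "Return a predicate that tests for equality with VALUE."
  #'(lambda (x) (eql x value)))

;; Finds 12 after expanding 1, 2, 3, 4, ... level by level:
;; (breadth-first-search 1 (is 12) #'binary-tree)  => 12

;; (depth-first-search 1 (is 12) #'binary-tree) would never return:
;; it keeps diving down the leftmost branch 1, 2, 4, 8, 16, ...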

Tree Search: BFS Summary
  In finite search spaces, BFS, like DFS, will find a solution if one exists
  In infinite search spaces, BFS will still find a solution if one exists
  BFS requires more space than DFS

Iterative Deepening
  Search depth-first to level n, then increase n
  Seems wasteful
    but is actually the best method for large spaces of unknown characteristics
    the search frontier expands exponentially, so it doesn't matter that you're sometimes searching the same (small number of) nodes multiple times
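
A minimal sketch of the idea, written outside the tree-search framework for brevity (depth-limited-search and iterative-deepening-search are names assumed here, not from the lecture code):

(defun depth-limited-search (state goal-p successors limit)
  "DFS that gives up below depth LIMIT; return a goal state or NIL."
  (cond ((funcall goal-p state) state)
        ((zerop limit) nil)
        (t (some #'(lambda (s)
                     (depth-limited-search s goal-p successors (- limit 1)))
                 (funcall successors state)))))

(defun iterative-deepening-search (start goal-p successors &key (max-depth 100))
  "Run depth-limited DFS with limits 0, 1, 2, ... until something is found."
  (loop for limit from 0 to max-depth
        thereis (depth-limited-search start goal-p successors limit)))

;; e.g. (iterative-deepening-search 1 (is 12) #'binary-tree)  => 12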

Bi-Directional Search
  Work forwards from the start
  Work backwards from the goal
  Until the two searches meet
  Doesn't work for many game problems
    How many different checkmate positions are there?

Controlling Search
  Knowledge
    DFS and BFS do not use knowledge of the domain
  Distance heuristic
    in many domains, it is possible to estimate how far you are from the goal
      "stronger" board position
    choose the successor (move) that takes you closest

Example
  Problem: we visit too many nodes, some clearly out of the question

Best First Search

(defun sorter (cost-fn)
  "Return a combiner that sorts states by COST-FN, cheapest first."
  #'(lambda (new old)
      (sort (append new old) #'< :key cost-fn)))

(defun best-first-search (start goal-p successors cost-fn)
  "Search lowest-cost states first until the goal is reached."
  (tree-search (list start) goal-p successors (sorter cost-fn)))
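
For example, with a hypothetical cost function that measures how far a state is from the goal value 12 (diff is a helper assumed here), best-first-search on the binary tree defined earlier heads almost straight for the goal:

(defun diff (num)
  "Cost function: how far the state X is from NUM."
  #'(lambda (x) (abs (- x num))))

;; (best-first-search 1 (is 12) #'binary-tree (diff 12))  => 12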

Greedy Search
  Best = closest to goal
  Problem
    Isn't guaranteed to find a solution
      not complete
    Isn't guaranteed to find the best solution
      not optimal

Greedy example
  Heuristic: minimize h(n) = "Euclidean distance to destination"
  Problem: not optimal (the route through Rimnicu Vilcea and Pitesti is shorter)

A* Search
  Best = min (path cost so far + estimated cost to goal)
  Restriction: the estimate must never overestimate the cost
  If so, A* is
    complete
    optimal

Example A*: minimize f(n) = g(n) + h(n)
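
One way to sketch f(n) = g(n) + h(n) on top of the best-first-search code above (the node structure and the a*-cost and node-successors helpers are assumptions made here, and this is only a tree-search version without repeated-state checking): wrap each state in a node that remembers the path cost g, and sort by g plus the heuristic.

(defstruct node
  state
  (g 0))   ; path cost accumulated so far

(defun a*-cost (h)
  "f(n) = g(n) + h(n): path cost so far plus estimated cost to the goal."
  #'(lambda (n) (+ (node-g n) (funcall h (node-state n)))))

(defun node-successors (successors step-cost)
  "Lift a successor function on states to one on nodes, accumulating g."
  #'(lambda (n)
      (mapcar #'(lambda (s)
                  (make-node :state s
                             :g (+ (node-g n)
                                   (funcall step-cost (node-state n) s))))
              (funcall successors (node-state n)))))

;; Usage sketch (start, goal-p, successors, step-cost, and h are placeholders;
;; h must never overestimate for the optimality claim above to hold):
;; (best-first-search (make-node :state start)
;;                    #'(lambda (n) (funcall goal-p (node-state n)))
;;                    (node-successors successors step-cost)
;;                    (a*-cost h))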

Beam Search
  The queue of states under consideration keeps growing
    can be very large: O(b^n), where b is the branching factor and n is the depth
  Completeness is required if there is only one solution
    we don't want to throw out the state that leads to it
  What if there are many good solutions?
    many possible checkmate positions
    we can afford to discard some unpromising states
  Beam search
    keep no more than k states on the queue
    if there are too many, discard the ones with the highest f(n)

Beam Search, cont'd

(defun beam-search (start goal-p successors cost-fn beam-width)
  "Like best-first-search, but never keep more than BEAM-WIDTH states."
  (tree-search (list start) goal-p successors
               #'(lambda (old new)
                   (let ((sorted (funcall (sorter cost-fn) old new)))
                     (if (> beam-width (length sorted))
                         sorted
                         (subseq sorted 0 beam-width))))))

Improving Beam Search
  What if the search fails?
    try different beam widths

(defun iter-wide-search (start goal-p successors cost-fn
                         &key (width 1) (max 100))
  "Search with beam width WIDTH; if that fails, widen the beam and retry."
  (unless (> width max)
    (or (beam-search start goal-p successors cost-fn width)
        (iter-wide-search start goal-p successors cost-fn
                          :width (+ width 1) :max max))))
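
A small illustration under assumed helpers (finite-binary-tree below, plus is, binary-tree, and diff from earlier): capping the tree at 15 keeps the search finite; a beam of width 2 happens to discard the branch that leads to 12, and iter-wide-search recovers by widening the beam to 3.

(defun finite-binary-tree (n)
  "Like BINARY-TREE, but discard any child larger than N."
  #'(lambda (x)
      (remove-if #'(lambda (child) (> child n))
                 (binary-tree x))))

;; (beam-search 1 (is 12) (finite-binary-tree 15) (diff 12) 2)    => NIL
;; (iter-wide-search 1 (is 12) (finite-binary-tree 15) (diff 12)) => 12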

(Practically) Infinite Search
  What if the goal state is so far away that search won't find it?
    chess has more states than there are atoms in the universe
  Pick a search depth
    estimate the "value" of the position at that depth
    treat that as the "result" of the search
  Search then becomes finding the best board position after k moves
    easy enough to store the best node so far and the path (move) to it

What about the opponent?
  Obviously, our opponent will not pick moves on the path to our winning game
  What move to predict?
    Worst case scenario: the opponent will do what's best for him
  To win, we need a strategy that will succeed even if the opponent plays his best

Mini-max assumption
  Assume that the opponent values the game state the opposite from you
    V_me(state) = -V_opp(state)
  At alternate nodes
    choose the state with maximum f for me
    or, choose the state with minimum f for the opponent

Mini-max algorithm
  Build a tree with two types of nodes
    max nodes: my move
    min nodes: opponent's move
  Perform depth-first search, with iterative deepening
  Evaluate the board position at each node
    on a max node, use the max of all children as the value of the parent
    on a min node, use the min of all children as the value of the parent
    when search is complete, the move that leads to the max child of the current node is the one to take
  Anytime
    this is an "anytime" algorithm: you can stop the search at any time and you have a best estimate of your move (to some depth)
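
A compact sketch of the procedure just described (the eval-fn, moves-fn, and make-move-fn parameters are assumed interfaces, and the iterative-deepening/anytime part is omitted for brevity):

(defun minimax-value (state depth maximizing-p eval-fn moves-fn make-move-fn)
  "Depth-limited minimax value of STATE from the maximizing player's view."
  (let ((moves (funcall moves-fn state)))
    (if (or (zerop depth) (null moves))
        (funcall eval-fn state)
        (let ((child-values
                (mapcar #'(lambda (move)
                            (minimax-value (funcall make-move-fn state move)
                                           (- depth 1) (not maximizing-p)
                                           eval-fn moves-fn make-move-fn))
                        moves)))
          (if maximizing-p
              (apply #'max child-values)
              (apply #'min child-values))))))

(defun minimax-move (state depth eval-fn moves-fn make-move-fn)
  "Pick the move with the best minimax value for the player to move."
  (let ((best-move nil) (best-value nil))
    (dolist (move (funcall moves-fn state) best-move)
      (let ((value (minimax-value (funcall make-move-fn state move)
                                  (- depth 1) nil
                                  eval-fn moves-fn make-move-fn)))
        (when (or (null best-value) (> value best-value))
          (setf best-move move
                best-value value))))))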

Problem
  I may waste time searching nodes that I would never use
  A* doesn't help, since a position may be bad after one move but better after 3
    e.g., a sacrifice

Alpha-beta pruning
  reduces the size of the search space without changing the answer
  Simple idea
    don't consider any moves that are worse than ones you already know about
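
A sketch of the same depth-limited search with alpha-beta pruning added, using the same assumed interfaces as the minimax sketch above; this is the fail-hard variant that returns alpha at max nodes and beta at min nodes:

(defun alpha-beta-value (state depth alpha beta maximizing-p
                         eval-fn moves-fn make-move-fn)
  "Minimax value of STATE, pruning branches that cannot change the answer."
  (let ((moves (funcall moves-fn state)))
    (if (or (zerop depth) (null moves))
        (funcall eval-fn state)
        (if maximizing-p
            (dolist (move moves alpha)
              (setf alpha (max alpha
                               (alpha-beta-value (funcall make-move-fn state move)
                                                 (- depth 1) alpha beta nil
                                                 eval-fn moves-fn make-move-fn)))
              ;; The opponent already has a better option elsewhere: prune.
              (when (>= alpha beta) (return alpha)))
            (dolist (move moves beta)
              (setf beta (min beta
                              (alpha-beta-value (funcall make-move-fn state move)
                                                (- depth 1) alpha beta t
                                                eval-fn moves-fn make-move-fn)))
              ;; We already have a better option elsewhere: prune.
              (when (<= beta alpha) (return beta)))))))

;; Root call, with "infinities" approximated by extreme fixnums (assuming
;; evaluation values fit comfortably inside that range):
;; (alpha-beta-value state 4 most-negative-fixnum most-positive-fixnum t
;;                   eval-fn moves-fn make-move-fn)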

Animated example: 533/W99/presentations/L2_5B_Lima_Neitz/abpruning.html

What about chance?
  In a game of chance, there is a random element in the game process
    Backgammon: the player can only make moves that use the outcome of the dice roll
  How do I know what my opponent will do?
    I don't, but I can have an expectation

Expectiminimax
  The idea: a game-theoretic utility calculation
    Expected value = sum over outcomes of (outcome value * likelihood of occurrence)
  The value of a node is not simply copied from the "best" child but summed over all possible children

Algorithm
  The tree has three types of nodes
    max nodes
    min nodes
    chance nodes
  Chance nodes calculate the expectation associated with all of their children
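
A sketch of the chance-node step (the list of (probability . resulting-state) pairs is an assumed representation, and minimax-value is the sketch from the mini-max slide above):

(defun chance-node-value (outcomes depth maximizing-p
                          eval-fn moves-fn make-move-fn)
  "Expected value over a chance node: OUTCOMES is a list of
  (probability . resulting-state) pairs."
  (loop for (prob . state) in outcomes
        sum (* prob (minimax-value state depth maximizing-p
                                   eval-fn moves-fn make-move-fn))))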

33/W99/presentations/L2_5B_Lima_Neitz/chance.html

Killer heuristic
  One additional optimization works well in chess
  Often a move that is really good or really bad will be really good or bad in multiple board positions
  Example: a move that captures my queen
    if my queen is under attack, the move in which the opponent takes my queen will be his best move in most board positions
    except the positions in which I move the queen out of attack
  If a move leads to a really good or really bad position, try it first when searching
    more likely to produce an extreme value that helps alpha-beta search
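
One way to sketch the bookkeeping (order-moves and killers are names assumed here): keep the moves that recently caused cutoffs and try them first, so alpha-beta sees extreme values early.

(defun order-moves (moves killers)
  "Try known killer moves first; keep the rest in their original order."
  (append (remove-if-not #'(lambda (m) (member m killers :test #'equal)) moves)
          (remove-if #'(lambda (m) (member m killers :test #'equal)) moves)))

;; In the alpha-beta sketch above, one would search
;; (order-moves (funcall moves-fn state) killers)
;; instead of (funcall moves-fn state).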

No class next week
  Progress report due tonight
    1 or 2 pages of text saying where you are
  Class on 6/5: CLOS