
Graph Search II GAM 376 Robin Burke

Outline: Homework #3; graph search review (DFS, BFS); A* search; iterative beam search; IA* search; search in turn-based games; alpha-beta search

Homework #3

// calculate the leader's collision box (factoring in closing velocity);
// not accurate, but OK since offset pursuit has us closing in on the
// leader; this is a worst-case assumption
double boxLength = Prm.MinDetectionBoxLength +
                   ((leader->Speed() + m_pVehicle->Speed()) / leader->MaxSpeed()) *
                   Prm.MinDetectionBoxLength;

m_pVehicle->World()->TagVehiclesWithinViewRange((BaseGameEntity*)leader, boxLength);

if (m_pVehicle->IsTagged())
{
  // if so, calculate my position in leader space and generate a force
  // in the opposite direction, proportional to how close the leader is
  Vector2D LeaderMyPos = PointToLocalSpace(m_pVehicle->Pos(),
                                           leader->Heading(),
                                           leader->Side(),
                                           leader->Pos());

  // boxWidth (the half-width of the box) is assumed to be defined earlier
  if (LeaderMyPos.x > 0 && fabs(LeaderMyPos.y) < boxWidth) // I'm in front and close
  {
    // the closer the leader is, the stronger the steering force should be
    double multiplier = 5.0 * (boxLength - LeaderMyPos.x) / boxLength; // some tinkering required

    // calculate the lateral force
    double force = (boxWidth - fabs(LeaderMyPos.y)) * -multiplier;

    return leader->Side() * force;
  }
}
// if not, fall through to standard offset pursuit

Uninformed Search

Depth-first search: keep expanding the most recent edge until we run out of edges, then backtrack.
- Characteristics: minimum memory cost; not guaranteed optimal; may not terminate if the search space is large.

Iterative-deepening search: do depth-first search with a limited search depth, increasing the depth if it fails (sketch below).
- Characteristics: optimal; cannot get stuck; minimum memory cost; search work is repeated at each level.

Breadth-first search: expand all edges at each node before going on to the next.
- Characteristics: guaranteed optimal; memory cost can be very high.

Dijkstra's algorithm: for graphs with weighted edges, expand the node with the cheapest path so far, keeping track of the cheapest path to each node.
- Characteristics: guaranteed optimal; may consider irrelevant paths.
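
A minimal sketch of iterative deepening over an adjacency-list graph; the Edge/Graph types and function names are illustrative, not from Buckland's code:

#include <vector>

struct Edge { int to; double cost; };
typedef std::vector<std::vector<Edge>> Graph;  // adjacency list

// depth-limited DFS: succeed only if the goal is within 'limit' edges
bool DepthLimitedDFS(const Graph& g, int node, int goal, int limit)
{
    if (node == goal) return true;
    if (limit == 0)   return false;
    for (const Edge& e : g[node])
        if (DepthLimitedDFS(g, e.to, goal, limit - 1))
            return true;
    return false;
}

// iterative deepening: repeat with a growing depth limit; shallow work
// is redone on every pass, but memory stays proportional to the depth
bool IterativeDeepening(const Graph& g, int start, int goal, int maxDepth)
{
    for (int limit = 0; limit <= maxDepth; ++limit)
        if (DepthLimitedDFS(g, start, goal, limit))
            return true;
    return false;
}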

Informed search: A*

Expand the "best" path so far instead of the cheapest. "Best" is defined as the sum of the path cost and an estimate of the distance to the goal; the estimate is called the search heuristic. The heuristic must underestimate the real cost; otherwise, the search is not guaranteed to return the optimal path.
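
A sketch of the core A* loop over the same illustrative Graph type as above, assuming the heuristic h never overestimates the remaining cost; this is a generic sketch, not Buckland's implementation:

#include <functional>
#include <limits>
#include <queue>
#include <vector>

// Edge/Graph as in the iterative-deepening sketch above
double AStar(const Graph& graph, int start, int goal,
             double (*h)(int node, int goal))
{
    const double INF = std::numeric_limits<double>::infinity();
    std::vector<double> g(graph.size(), INF);   // cheapest cost found so far
    typedef std::pair<double, int> QEntry;      // (f = g + h, node)
    std::priority_queue<QEntry, std::vector<QEntry>,
                        std::greater<QEntry>> open;  // smallest f first

    g[start] = 0.0;
    open.push(QEntry(h(start, goal), start));

    while (!open.empty()) {
        QEntry top = open.top(); open.pop();
        int node = top.second;
        if (node == goal) return g[goal];       // cheapest path cost
        if (top.first > g[node] + h(node, goal)) continue;  // stale queue entry
        for (const Edge& e : graph[node]) {
            double newCost = g[node] + e.cost;
            if (newCost < g[e.to]) {            // found a cheaper path to e.to
                g[e.to] = newCost;
                open.push(QEntry(newCost + h(e.to, goal), e.to));
            }
        }
    }
    return INF;  // goal unreachable
}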

Buckland's implementation

Beam search

A* shares a problem with BFS: the memory cost of too many not-yet-expanded nodes. We can limit the set of nodes considered:
- by cost: throw out all paths of cost > C
- by size: limit the priority queue to L entries, throwing out nodes of index > L in the queue (not every priority queue implementation can do this efficiently; see the sketch below)

Characteristics: not guaranteed to be optimal; not guaranteed to be complete; limited memory cost.

Iterative widening: if we don't find a solution, increase C (or L) until we do.
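
One way to realize the size limit: std::priority_queue cannot be truncated, so a plain vector can stand in for the open list (Path is a hypothetical open-list entry):

#include <algorithm>
#include <vector>

struct Path { double f; int node; };  // hypothetical open-list entry

// keep only the L cheapest paths; everything past index L is discarded
void PruneToBeamWidth(std::vector<Path>& open, std::size_t L)
{
    if (open.size() <= L) return;
    std::partial_sort(open.begin(), open.begin() + L, open.end(),
                      [](const Path& a, const Path& b) { return a.f < b.f; });
    open.resize(L);
}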

Bi-directional search

Do two searches, one starting from the beginning and one from the end, and look for overlap in the middle. This cuts the search depth in half: roughly 2 * b^(n/2) nodes instead of b^n (b = branching factor, n = solution depth).

Characteristics: can be used with DFS, A*, IDS; not compatible with every search space.
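
A bidirectional breadth-first sketch on the illustrative Graph type, assuming an undirected graph (a directed graph would need reversed edges for the backward search); path reconstruction is omitted:

#include <queue>
#include <set>
#include <vector>

// Edge/Graph as in the iterative-deepening sketch above

// expand one node from this side's frontier; true if we touched a node
// the other side has already seen (the two searches met in the middle)
static bool Step(const Graph& g, std::queue<int>& frontier,
                 std::set<int>& seen, const std::set<int>& other)
{
    if (frontier.empty()) return false;
    int node = frontier.front(); frontier.pop();
    if (other.count(node)) return true;
    for (const Edge& e : g[node])
        if (seen.insert(e.to).second)   // not yet visited from this side
            frontier.push(e.to);
    return false;
}

bool BidirectionalSearch(const Graph& g, int start, int goal)
{
    std::set<int> seenF, seenB;
    std::queue<int> qF, qB;
    qF.push(start); seenF.insert(start);
    qB.push(goal);  seenB.insert(goal);

    // alternate expansions so each side searches only about half the depth
    while (!qF.empty() || !qB.empty()) {
        if (Step(g, qF, seenF, seenB)) return true;
        if (Step(g, qB, seenB, seenF)) return true;
    }
    return false;
}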

Turn-based games

Use a graph to represent the possible courses of action in a turn-based game. The basic idea: nodes represent the "state" of the game (a set of cards, a board position); edges are moves (changes in game state); winning means reaching a particular state defined by the rules. A winning strategy is a path through the edges/moves that leads to a winning state.

(Practically) Infinite Search

What if the goal state is so far away that search won't find it? Chess has more states than there are atoms in the solar system; it cannot be searched completely. Instead, pick a search depth, estimate the "value" of the position at that depth, and treat that estimate as the "result" of the search. Search then becomes finding the best board position after k moves, and it is easy enough to store the best node so far and the path (move) to it.

What about the opponent?

Obviously, our opponent will not choose moves that keep us on the path to our winning game. What move should we predict? Assume the worst case: the opponent will do what's best for him. To win, we need a strategy that will succeed even if the opponent plays his best.

Mini-max assumption

Assume that the opponent values the game state exactly opposite from you: V_me(state) = -V_opp(state). At alternating nodes, choose the state with the maximum f for me, or the state with the minimum f for the opponent.

Mini-max algorithm

Build a tree with two types of nodes: max nodes (my move) and min nodes (opponent's move). Perform depth-first search with iterative deepening, evaluating the board position at each node: on a max node, use the max of all children as the value of the parent; on a min node, use the min of all children. When the search is complete, the move that leads to the max child of the current node is the one to take.

This is an "anytime" algorithm: you can stop the search at any time and still have a best estimate of your move (to some depth).
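
A depth-limited minimax sketch; GameState, Move, LegalMoves, Apply, and Evaluate are assumed interfaces standing in for whatever game representation is used:

#include <algorithm>
#include <vector>

struct Move { /* game-specific move data */ };            // hypothetical
struct GameState { bool IsTerminal() const; /* ... */ };  // hypothetical
double Evaluate(const GameState&);                // static board evaluation
std::vector<Move> LegalMoves(const GameState&);
GameState Apply(const GameState&, const Move&);

// value of state s searched to 'depth' plies; maxNode alternates per ply
double Minimax(const GameState& s, int depth, bool maxNode)
{
    if (depth == 0 || s.IsTerminal())
        return Evaluate(s);                  // cut off: estimate the position

    double best = maxNode ? -1e9 : 1e9;
    for (const Move& m : LegalMoves(s)) {
        double v = Minimax(Apply(s, m), depth - 1, !maxNode);
        best = maxNode ? std::max(best, v)   // my move: best child for me
                       : std::min(best, v);  // opponent's move: worst for me
    }
    return best;
}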

Problem

I may waste time searching nodes that I would never use. A* doesn't help, since a position may be bad after one move but better after three (a sacrifice, for example).

Alpha-beta pruning

Alpha-beta pruning reduces the size of the search space without changing the answer. The simple idea: don't consider any moves that are worse than ones you already know about.
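
The same sketch with pruning added, using the assumed interfaces from the minimax sketch above; alpha is the best value the maximizer can already guarantee, beta the best value the minimizer can:

double AlphaBeta(const GameState& s, int depth,
                 double alpha, double beta, bool maxNode)
{
    if (depth == 0 || s.IsTerminal())
        return Evaluate(s);

    for (const Move& m : LegalMoves(s)) {
        double v = AlphaBeta(Apply(s, m), depth - 1, alpha, beta, !maxNode);
        if (maxNode) alpha = std::max(alpha, v);
        else         beta  = std::min(beta, v);
        if (beta <= alpha)
            break;   // prune: this line is worse than one already available
    }
    return maxNode ? alpha : beta;
}

// initial call: AlphaBeta(current, k, -1e9, 1e9, true)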

Animated example: 533/W99/presentations/L2_5B_Lima_Neitz/abpruning.html

What about chance?

In a game of chance there is a random element in the game process. In backgammon, for example, the player can only make moves that use the outcome of the dice roll. How do I know what my opponent will do? I don't, but I can have an expectation.

Expectiminimax

The idea is the game-theoretic utility calculation: the expected value of a position is the sum, over all possible outcomes, of the outcome's value times its likelihood of occurrence. The value of a chance node is therefore not simply copied from the "best" child but summed over all possible children.

Algorithm

The tree now has three types of nodes: max nodes, min nodes, and chance nodes. Chance nodes calculate the expectation over all of their children.
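
A sketch of the chance-node case, reusing the minimax sketch above; ChanceOutcome and ChanceOutcomes are assumed interfaces (in backgammon they would enumerate the 21 distinct dice rolls with their probabilities):

#include <vector>

// assumed interface: one possible random outcome and its probability
struct ChanceOutcome { double probability; GameState state; };
std::vector<ChanceOutcome> ChanceOutcomes(const GameState&);

// a chance node's value is the probability-weighted sum over all of
// its children, not the max or min of them
double ExpectedValue(const GameState& s, int depth, bool maxNext)
{
    double ev = 0.0;
    for (const ChanceOutcome& o : ChanceOutcomes(s))
        ev += o.probability * Minimax(o.state, depth, maxNext);
    return ev;
}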

Animated example: 533/W99/presentations/L2_5B_Lima_Neitz/chance.html

Killer heuristic

One additional optimization works well in chess: a move that is really good or really bad is often really good or bad in multiple board positions. Example: if my queen is under attack, the move in which the opponent takes my queen will be his best move in most board positions, except those in which I move the queen out of attack. If a move leads to a really good or really bad position, try it first when searching; it is more likely to produce an extreme value that helps alpha-beta search.
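
A move-ordering sketch of the idea; killers is a hypothetical per-depth table of moves that recently caused cutoffs, and Move (from the minimax sketch) is assumed to support operator==:

#include <algorithm>
#include <vector>

// hypothetical per-depth table of killer moves, updated on each cutoff
extern std::vector<Move> killers;

// put this depth's killer move first so alpha-beta is more likely
// to get an early cutoff
void OrderMoves(std::vector<Move>& moves, int depth)
{
    std::vector<Move>::iterator it =
        std::find(moves.begin(), moves.end(), killers[depth]);
    if (it != moves.end())
        std::iter_swap(moves.begin(), it);  // search the killer move first
}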

Midterm review

Midterm topics: finite state machines; steering behaviors; graph search.