Games: Expectimax


Games: Expectimax — warm-up: alpha-beta pruning. (A sequence of slides steps through alpha-beta on a three-level MAX / MIN / MAX tree: leaf values are backed up as bounds such as <=4, 4, >=4, <=6, >=3, and <=3; a branch is pruned whenever α ≥ β, remaining leaves that cannot change a node's value are skipped, and the value backed up to the root is 4.)

function αβ-SEARCH(state)
    v = MAX-VALUE(state, -infinity, +infinity)
    return v

function MAX-VALUE(state, α, β) returns a utility value v
    if isLeaf(state) then return UTILITY(state)
    v = -infinity
    for each successor c of state do
        v = MAX(v, MIN-VALUE(c, α, β))
        if v ≥ β then return v
        α = MAX(α, v)
    return v

function MIN-VALUE(state, α, β) returns a utility value v
    if isLeaf(state) then return UTILITY(state)
    v = infinity
    for each successor c of state do
        v = MIN(v, MAX-VALUE(c, α, β))
        if v ≤ α then return v
        β = MIN(β, v)
    return v
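For reference, here is a minimal runnable Python sketch of the same pseudocode (not the course's code). It assumes a game tree written as nested lists, where an integer is a leaf utility and a list holds a node's successors; the root is a MAX node.

    import math

    def alphabeta_search(state):
        # Root is a MAX node; search the whole tree with open bounds.
        return max_value(state, -math.inf, math.inf)

    def max_value(state, alpha, beta):
        if isinstance(state, int):                 # leaf: return its utility
            return state
        v = -math.inf
        for c in state:                            # successors of this MAX node
            v = max(v, min_value(c, alpha, beta))
            if v >= beta:                          # prune: MIN above will avoid this branch
                return v
            alpha = max(alpha, v)
        return v

    def min_value(state, alpha, beta):
        if isinstance(state, int):
            return state
        v = math.inf
        for c in state:                            # successors of this MIN node
            v = min(v, max_value(c, alpha, beta))
            if v <= alpha:                         # prune: MAX above will avoid this branch
                return v
            beta = min(beta, v)
        return v

    # Classic two-ply example (not the tree from the slides):
    # alphabeta_search([[3, 12, 8], [2, 4, 6], [14, 5, 2]]) == 3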

Stochastic Games

Modify Minimax Tree? MAX knows his/her possible next moves, but not MIN's. Why no randomness for MAX's moves?

Modify Minimax Tree: the unknown (?) outcomes are replaced by chance nodes in the MAX / MIN tree. Now MAX maximizes the expected minimax value.

Quick Review of Expected Value
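A quick worked example to go with this review (not taken from the slides): the expected value of a random quantity is the probability-weighted average of its outcomes, E[X] = Σ P(x)·x. For a fair six-sided die, E = (1 + 2 + 3 + 4 + 5 + 6) / 6 = 3.5; for a chance node whose two children have values 2 and 6, each with probability 1/2, the expected value is 0.5·2 + 0.5·6 = 4.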

Expectimax: the game tree now contains MAX, MIN, and CHANCE nodes; the value of a chance node is the probability-weighted average of its children (worked through over several slides).

Pruning Expectimax Trees? (Figure: a tree with MAX, MIN, and CHANCE nodes and a tentative >=2 bound at one node.) Can we prune the way alpha-beta does? No!

Pruning Expectimax Trees. Pruning is much more difficult: – possible if there is a bound on the utility values (not always very realistic) – instead, use evaluation functions and depth-limited search
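As one illustration of that last point, here is a sketch of depth-limited expectimax with an evaluation function. The Node representation (a kind field of "max", "min", or "chance", child lists, and (probability, child) pairs for chance nodes) and eval_fn are assumptions made for this example, not anything defined in the slides.

    def expectimax(node, depth, eval_fn):
        # Leaf: return its exact utility.
        if node.is_leaf():
            return node.utility
        # Depth limit reached: fall back to a heuristic estimate.
        if depth == 0:
            return eval_fn(node)
        if node.kind == "max":
            return max(expectimax(c, depth - 1, eval_fn) for c in node.children)
        if node.kind == "min":
            return min(expectimax(c, depth - 1, eval_fn) for c in node.children)
        # Chance node: probability-weighted average of its children.
        return sum(p * expectimax(c, depth - 1, eval_fn) for p, c in node.children)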

Searching in General

Recall N-Queens problem

N-Queens problem. What is the depth? – 8 What is the branching factor? – ≤ 8 How many nodes? – 8^8 ≈ 17 million nodes Do we care about the path? What do we really care about?

Local search. Key difference: we don’t care about the path to the solution, only the solution itself! Other similar problems? – Sudoku – Crossword puzzles – Scheduling – …

Local Search Approach. Start with a random configuration. Repeat until goal: generate a set of “local” next states (successors); move to one of these next states.

Local Search Approach. Start with a random configuration. Repeat until goal: generate a set of “local” next states (successors); move to one of these next states. Requirements: – ability to generate an initial, random state – generate the set of next states – criterion for evaluating which state to choose next

Example: 4 Queens. Generating a random state: – one queen per column Generating new states: – move a queen within its column Goal test: – no attacks Evaluation? (assume a cost, so lower is better)

Example: 4 Queens. Generating a random state: – one queen per column Generating new states: – move a queen within its column Goal test: – no attacks Evaluation? (assume a cost, so lower is better) – eval(state) = number of attacked queens

Local Search Approach. Start with a random configuration. Repeat until goal: generate a set of “local” next states (successors); move to one of these next states. Which one? Requirements: – ability to generate an initial, random state – generate the set of next states – criterion for evaluating which state to choose next

Hill-Climbing

hillClimbing(problem):
    currentNode = makeRandomNode(problem)
    while True:
        nextNode = getLowestSuccessor(currentNode)
        if eval(currentNode) <= eval(nextNode):
            return currentNode
        // else
        currentNode = nextNode

Example: n-queens. 3 steps! eval(state) = 4 → eval(state) = 3 → eval(state) = 2 → Goal!
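The pseudocode and the n-queens walkthrough above can be turned into a short runnable Python sketch. Everything below (the function names, the tuple-of-rows state representation, eval counting attacking pairs) is my own illustration under those assumptions, not the course's code.

    import random

    def eval_state(state):
        # Number of pairs of queens attacking each other (0 means goal).
        n = len(state)
        return sum(1 for i in range(n) for j in range(i + 1, n)
                   if state[i] == state[j] or abs(state[i] - state[j]) == j - i)

    def successors(state):
        # All states reachable by moving one queen within its column.
        n = len(state)
        for col in range(n):
            for row in range(n):
                if row != state[col]:
                    yield state[:col] + (row,) + state[col + 1:]

    def hill_climbing(n=8):
        current = tuple(random.randrange(n) for _ in range(n))
        while True:
            best = min(successors(current), key=eval_state)
            if eval_state(current) <= eval_state(best):
                return current            # no better neighbor: local minimum (maybe not a goal)
            current = best

    solution = hill_climbing()
    print(solution, eval_state(solution))  # eval 0 means no attacks; otherwise a local minimum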

Hill-Climbing Problems? (same pseudocode as above)

Hill-Climbing Problems? Does not look beyond the successors of the current state. Does not remember previous states. “Like climbing Everest in dense fog while suffering amnesia.”

Hill-climbing search: 8-queens problem. Initial state (ignore the numbers); after 5 moves, eval(state) = 1. Problem with this state?

Problems with hill-climbing. (Figure: a cost landscape showing the current state, the local minimum where we end up, and the global minimum where we want to end up.)

Idea 1: restart! Random-restart hill climbing – if we end up in a local minimum, start over again at a new random location. Pros: Cons:

Idea 1: restart! Random-restart hill climbing – if we end up in a local minimum, start over again at a new random location. Pros: – simple – no memory increase – for n-queens, usually a few restarts gets us there; the 3-million-queens problem can be solved in < 1 min! Cons: – if the space has a lot of local minima, we will have to restart a lot – loses any information we learned in the first search – sometimes we may not know we’re in a local minimum/maximum
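A possible random-restart wrapper, reusing the hypothetical hill_climbing and eval_state functions from the earlier n-queens sketch:

    def random_restart_hill_climbing(n=8, max_restarts=1000):
        for _ in range(max_restarts):
            state = hill_climbing(n)
            if eval_state(state) == 0:   # goal: no attacking pairs
                return state
        return None                      # give up after too many restarts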

Idea 2: introduce randomness. Rather than always selecting the best, pick a random move with some probability: sometimes pick the best, sometimes pick at random; make better states more likely, worse states less likely. The book gives just one scheme, but there are many ways of introducing randomness!

Randomness with simulated annealing. What does the term annealing mean? “When I proposed to my wife I was annealing down on one knee”?

Idea 3: simulated annealing. What does the term annealing mean?

Simulated annealing. Early on, lots of randomness – avoids getting stuck in local minima – avoids getting lost on a plateau. As time progresses, allow less and less randomness. (Figure: randomness decreasing over time.)

Simulated Annealing. Start with a random configuration. Repeat until goal: generate a set of “local” next states (successors); move to a random next state.
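A minimal simulated-annealing sketch in this spirit (not the textbook's exact SIMULATED-ANNEALING procedure). It reuses the hypothetical eval_state and successors helpers from the n-queens sketch and assumes a simple geometric cooling schedule.

    import math
    import random

    def simulated_annealing(n=8, temp=10.0, cooling=0.995, min_temp=1e-3):
        current = tuple(random.randrange(n) for _ in range(n))
        while temp > min_temp and eval_state(current) > 0:
            candidate = random.choice(list(successors(current)))
            delta = eval_state(candidate) - eval_state(current)
            # Always accept improvements; accept worse moves with a probability
            # that shrinks as the temperature drops.
            if delta <= 0 or random.random() < math.exp(-delta / temp):
                current = candidate
            temp *= cooling
        return current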

Idea 4: why just 1 state? Local beam search: keep track of k states – Start with k randomly generated states – At each iteration, all the successors of all k states are generated. If any one is a goal state, stop – else select the k best successors from the complete list and repeat.

Idea 4: why just 1 state? Local beam search: keep track of k states – Start with k randomly generated states – At each iteration, all the successors of all k states are generated. If any one is a goal state, stop – else select the k best successors from the complete list and repeat. Cons? Uses more memory; over time, the set of states can become very similar.
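A short local beam search sketch under the same assumptions as before (eval_state and successors from the earlier n-queens example; all names are mine):

    import random

    def local_beam_search(n=8, k=10, max_iters=1000):
        states = [tuple(random.randrange(n) for _ in range(n)) for _ in range(k)]
        for _ in range(max_iters):
            for s in states:
                if eval_state(s) == 0:
                    return s                                   # found a goal state
            pool = [c for s in states for c in successors(s)]  # all successors of all k states
            states = sorted(pool, key=eval_state)[:k]          # keep the k best
        return min(states, key=eval_state)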

Idea 5: genetic algorithms Start with a set of k states Rather than pick from these, create new states by combining states Maintain a “population” of states

Idea 5: genetic algorithms Start with a set of k states Rather than pick from these, create new states by combining states Maintain a “population” of states Inspired by the biological evolution process Uses concepts of “Natural Selection” and “Genetic Inheritance” (Darwin 1859) Originally developed by John Holland (1975)

The Algorithm. Randomly generate an initial population. Repeat the following: 1. Select parents and “reproduce” the next generation 2. Randomly mutate some 3. Evaluate the fitness of the new generation 4. Discard the old generation and keep some of the best from the new generation

Genetic Algorithm: Crossover. (Figure: Parent 1 and Parent 2 are spliced at a crossover point to produce Child 1 and Child 2.)

Genetic Algorithm: Mutation. (Figure: the children produced by crossover, with a randomly chosen position changed in each by mutation.)
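Putting the loop, crossover, and mutation together for 8-queens: a small sketch under the same assumptions as the earlier examples (tuple-of-rows states, eval_state counting attacking pairs). The selection and mutation details here are one simple choice among many, not the slides' exact scheme.

    import random

    def crossover(p1, p2):
        cut = random.randrange(1, len(p1))          # splice point
        return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

    def mutate(state, rate=0.1):
        if random.random() < rate:
            col = random.randrange(len(state))      # move one queen to a random row
            state = state[:col] + (random.randrange(len(state)),) + state[col + 1:]
        return state

    def genetic_algorithm(n=8, pop_size=100, generations=1000):
        population = [tuple(random.randrange(n) for _ in range(n)) for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=eval_state)
            if eval_state(population[0]) == 0:
                return population[0]                # goal: no attacking pairs
            parents = population[:pop_size // 2]    # keep the fitter half as parents
            children = []
            while len(children) < pop_size:
                c1, c2 = crossover(*random.sample(parents, 2))
                children += [mutate(c1), mutate(c2)]
            population = children[:pop_size]
        return min(population, key=eval_state)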

Genetic algorithms on 8-Queens. (Figure: two 8-queens board states with eval(state) = 7 and eval(state) = 6.)

Genetic algorithms. (Figure: 8-queens board states with eval(state) = 7, eval(state) = 6, and eval(state) = 5.)