Utility Theory & MDPs Tamara Berg CS 560 Artificial Intelligence Many slides throughout the course adapted from Svetlana Lazebnik, Dan Klein, Stuart Russell, Andrew Moore, Percy Liang, Luke Zettlemoyer

Announcements
HW2 will be online tomorrow
– Due Oct 8 (but make sure to start early!)
As always, you can work in groups of up to 3 and submit 1 written/coding solution (pairs don't need to be the same as HW1)

AI/Games in the news Sept 14, 2015

Review from last class

A more abstract game tree: a two-ply game, with terminal utilities shown for MAX

Computing the minimax value of a node
Minimax(node) =
– Utility(node) if node is terminal
– max over actions of Minimax(Succ(node, action)) if player = MAX
– min over actions of Minimax(Succ(node, action)) if player = MIN
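This recursion translates almost directly into code. A minimal Python sketch follows; the game interface (is_terminal, utility, player, successors) is a hypothetical stand-in for whatever game representation is actually used in the assignments:

def minimax(node, game):
    # Returns the minimax value of `node` from MAX's point of view.
    # `game` is assumed to provide: is_terminal(node), utility(node),
    # player(node) -> "MAX" or "MIN", and successors(node) -> child nodes.
    if game.is_terminal(node):
        return game.utility(node)
    values = [minimax(child, game) for child in game.successors(node)]
    return max(values) if game.player(node) == "MAX" else min(values)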

Alpha-beta pruning
It is possible to compute the exact minimax decision without expanding every node in the game tree
(Worked example: the slides step through the two-ply tree, pruning branches that cannot affect the final decision.)
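As a rough sketch (using the same hypothetical game interface as the minimax code above), alpha-beta keeps two bounds and stops expanding a node as soon as its value can no longer affect the decision:

def alphabeta(node, game, alpha=float("-inf"), beta=float("inf")):
    # Minimax value of `node`, skipping branches that cannot change the answer.
    if game.is_terminal(node):
        return game.utility(node)
    if game.player(node) == "MAX":
        value = float("-inf")
        for child in game.successors(node):
            value = max(value, alphabeta(child, game, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:   # MIN already has a better alternative elsewhere
                break
        return value
    else:
        value = float("inf")
        for child in game.successors(node):
            value = min(value, alphabeta(child, game, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:   # MAX already has a better alternative elsewhere
                break
        return value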

Resource Limits
Alpha-beta still has to search all the way to terminal states for portions of the search space. Instead, we can cut off the search earlier and apply a heuristic evaluation function:
– Search a limited depth of the tree
– Replace terminal utilities with an evaluation function for non-terminal positions
– The performance of the program depends heavily on its evaluation function

Evaluation function
Cut off search at a certain depth and compute the value of an evaluation function for a state instead of its minimax value
– The evaluation function may be thought of as the probability of winning from a given state, or the expected value of that state
A common evaluation function is a weighted sum of features:
Eval(s) = w1 f1(s) + w2 f2(s) + … + wn fn(s)
Evaluation functions may be learned from game databases or by having the program play many games against itself
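In code, such a weighted evaluation is just a dot product between feature values and weights. A small illustrative sketch (the feature functions named in the usage comment are placeholders, not the ones any real chess program uses):

def weighted_eval(state, features, weights):
    # Eval(s) = w1*f1(s) + w2*f2(s) + ... + wn*fn(s)
    # `features` is a list of functions f_i(state); `weights` the matching w_i.
    return sum(w * f(state) for w, f in zip(weights, features))

# Hypothetical usage for a chess-like game:
# value = weighted_eval(state, [material_balance, mobility], [9.0, 0.1])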

Cutting off search
Horizon effect: you may incorrectly estimate the value of a state by overlooking an event that is just beyond the depth limit
– For example, a damaging move by the opponent that can be delayed but not avoided
Possible remedies:
– Quiescence search: do not cut off search at positions that are unstable – for example, are you about to lose an important piece?
– Singular extension: a strong move that should be tried when the normal depth limit is reached

Additional techniques
– Transposition table to store previously expanded states
– Forward pruning to avoid considering all possible moves
– Lookup tables for opening moves and endgames
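For instance, a transposition table can be as simple as a dictionary keyed by (state, remaining depth). This is only a sketch assuming the state is hashable, not the representation used by any particular engine:

transposition_table = {}

def cached_value(state, depth, search_fn):
    # Reuse a previously computed search value for a position reached
    # through a different move order (a "transposition").
    key = (state, depth)
    if key not in transposition_table:
        transposition_table[key] = search_fn(state, depth)
    return transposition_table[key]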

Iterative deepening search
Use DFS as a subroutine:
1. Check the root
2. Do a DFS with depth limit 1
3. If there is no path of length 1, do a DFS with depth limit 2
4. If there is no path of length 2, do a DFS with depth limit 3
5. And so on…
Why might this be useful for multi-player games?
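A sketch of the idea applied to game search (hypothetical helpers again): run deeper and deeper depth-limited searches, so that whenever time runs out you can return the result of the last completed depth.

def depth_limited_value(node, game, depth, evaluate):
    # Minimax to a fixed depth; apply the evaluation function at the cutoff.
    if game.is_terminal(node):
        return game.utility(node)
    if depth == 0:
        return evaluate(node)
    values = [depth_limited_value(c, game, depth - 1, evaluate)
              for c in game.successors(node)]
    return max(values) if game.player(node) == "MAX" else min(values)

def iterative_deepening_value(node, game, evaluate, max_depth):
    # Each pass redoes earlier work, but the deepest pass dominates the cost,
    # and an answer is always available if the search must stop early.
    value = None
    for depth in range(1, max_depth + 1):
        value = depth_limited_value(node, game, depth, evaluate)
    return value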

Chess playing systems
Baseline system: 200 million node evaluations per move (3 min), minimax with a decent evaluation function and quiescence search
– 5-ply ≈ human novice
Add alpha-beta pruning
– 10-ply ≈ typical PC, experienced player
Deep Blue: 30 billion evaluations per move, singular extensions, evaluation function with 8000 features, large databases of opening and endgame moves
– 14-ply ≈ Garry Kasparov
Recent state of the art (Hydra, ca. 2006): 36 billion evaluations per second, advanced pruning techniques
– 18-ply ≈ better than any human alive?

Games of chance How to incorporate dice throwing into the game tree?

Maximum Expected Utility
Why should we calculate expected utility?
Principle of maximum expected utility: an agent should choose the action which maximizes its expected utility, given its knowledge
General principle for decision making (definition of rationality)

Reminder: Expectations
The expected value of a function is its average value, weighted by the probability distribution over inputs
Example: How long to get to the airport? Length of driving time as a function of traffic:
L(none) = 20, L(light) = 30, L(heavy) = 60
P(T) = {none: 0.25, light: 0.5, heavy: 0.25}
What is my expected driving time, E[L(T)]?
E[L(T)] = L(none)*P(none) + L(light)*P(light) + L(heavy)*P(heavy)
E[L(T)] = (20 * 0.25) + (30 * 0.5) + (60 * 0.25) = 35
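The same computation in a couple of lines of Python, just to make the recipe explicit (values copied from the example above):

driving_time = {"none": 20, "light": 30, "heavy": 60}
prob = {"none": 0.25, "light": 0.5, "heavy": 0.25}
expected_time = sum(prob[t] * driving_time[t] for t in prob)
print(expected_time)  # 35.0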

Games of chance

Expectiminimax: for chance nodes, average values weighted by the probability of each outcome
– Nasty branching factor; defining evaluation functions and pruning algorithms is more difficult
Monte Carlo simulation: when you get to a chance node, simulate a large number of games with random dice rolls and use the win percentage as the evaluation function
– Can work well for games like Backgammon
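A sketch of expectiminimax, extending the earlier minimax code with chance nodes (node_type and outcomes are assumed additions to the hypothetical game interface, not part of any specific framework):

def expectiminimax(node, game):
    # MAX and MIN nodes behave as in minimax; CHANCE nodes return the
    # probability-weighted average of their children's values.
    if game.is_terminal(node):
        return game.utility(node)
    kind = game.node_type(node)   # "MAX", "MIN", or "CHANCE"
    if kind == "CHANCE":
        return sum(p * expectiminimax(child, game)
                   for p, child in game.outcomes(node))
    values = [expectiminimax(child, game) for child in game.successors(node)]
    return max(values) if kind == "MAX" else min(values)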

Partially observable games
Card games like bridge and poker
Monte Carlo simulation: deal all the cards randomly in the beginning and pretend the game is fully observable
– "Averaging over clairvoyance"
– Problem: this strategy does not account for bluffing, information gathering, etc.

Origins of game playing algorithms
Minimax algorithm: Ernst Zermelo, 1912; first published in 1928 by John von Neumann
Chess playing with evaluation function, quiescence search, selective search: Claude Shannon, 1949
Alpha-beta search: John McCarthy, 1956
Checkers program that learns its own evaluation function by playing against itself: Arthur Samuel, 1956

Game playing algorithms today
Computers are better than humans:
– Checkers: solved in 2007
– Chess: IBM Deep Blue defeated Kasparov in 1997
Computers are competitive with top human players:
– Backgammon: the TD-Gammon system used reinforcement learning to learn a good evaluation function
– Bridge: top systems use Monte Carlo simulation and alpha-beta search
Computers are not competitive:
– Go: branching factor 361; existing systems use Monte Carlo simulation and pattern databases


Utility Theory

Maximum Expected Utility
Principle of maximum expected utility: an agent should choose the action which maximizes its expected utility, given its knowledge
General principle for decision making (definition of rationality)
Where do utilities come from?

Why MEU?

Utility Scales
Normalized utilities: u+ = 1.0, u− = 0.0
Micromorts: a one-millionth chance of death; useful for reasoning about how much people will pay to reduce product risks, etc.

Human Utilities
How much do people value their lives?
– How much would you pay to avoid a risk, e.g. Russian roulette with a million-barreled revolver (1 micromort)?
– Driving in a car for 230 miles incurs a risk of 1 micromort.

Measuring Utilities
Utilities can be calibrated with a standard lottery between the best possible prize and the worst possible catastrophe: find the probability p at which the agent is indifferent between a given state and that lottery.

Markov Decision Processes
Stochastic, sequential environments (Chapter 17)
Image credit: P. Abbeel and D. Klein

Components:
– States s, beginning with initial state s0
– Actions a: each state s has a set of actions A(s) available from it
– Transition model P(s' | s, a): the Markov assumption says the probability of going to s' from s depends only on s and a, not on any other past actions or states
– Reward function R(s)
Policy π(s): the action that an agent takes in any given state
– The "solution" to an MDP
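These components map naturally onto a small container type. A bare-bones Python sketch (the type names and fields are illustrative, not the course's code):

from dataclasses import dataclass
from typing import Callable, Dict, Hashable, List

State = Hashable
Action = Hashable

@dataclass
class MDP:
    states: List[State]
    actions: Callable[[State], List[Action]]                    # A(s)
    transition: Callable[[State, Action], Dict[State, float]]   # P(s' | s, a)
    reward: Callable[[State], float]                            # R(s)

# A policy is then simply a mapping from states to actions:
Policy = Dict[State, Action]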

Overview
First, we will look at how to "solve" MDPs: find the optimal policy when the transition model and the reward function are known
Second, we will consider reinforcement learning, where we don't know the rules of the environment or the consequences of our actions

Game show
A series of questions with increasing level of difficulty and increasing payoff
Decision: at each step, take your earnings and quit, or go for the next question
– If you answer wrong, you lose everything
Questions: Q1 = $100 question, Q2 = $1,000 question, Q3 = $10,000 question, Q4 = $50,000 question
Quitting before Q2, Q3, or Q4 pays $100, $1,100, or $11,100 respectively; answering Q4 correctly pays $61,100; an incorrect answer at any point pays $0

Game show
Consider the $50,000 question (Q4)
– Probability of guessing correctly: 1/10
– Quit or go for the question?
What is the expected payoff for continuing? 0.1 * 61,100 + 0.9 * 0 = 6,110
What is the optimal decision?
(Probabilities of answering correctly: Q1 9/10, Q2 3/4, Q3 1/2, Q4 1/10)

Game show
What should we do in Q3?
– Payoff for quitting: $1,100
– Payoff for continuing: 0.5 * $11,100 = $5,550
What about Q2?
– $100 for quitting vs. $4,162 for continuing
What about Q1?
Working backwards: U(Q4) = $11,100, U(Q3) = $5,550, U(Q2) = $4,162, U(Q1) = $3,746
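The backward induction above can be checked with a few lines of Python; the payoffs and success probabilities are the ones given on the slides:

p_correct = {"Q4": 0.1, "Q3": 0.5, "Q2": 0.75, "Q1": 0.9}
quit_payoff = {"Q4": 11_100, "Q3": 1_100, "Q2": 100, "Q1": 0}

value = 61_100  # payoff for answering Q4 correctly
for q in ["Q4", "Q3", "Q2", "Q1"]:
    value = max(quit_payoff[q], p_correct[q] * value)  # quit now vs. risk continuing
    print(q, round(value))  # Q4 11100, Q3 5550, Q2 4162, Q1 3746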

Grid world
R(s) = a small negative value for every non-terminal state
Transition model: moves are noisy, so the agent does not always end up in the intended direction
Source: P. Abbeel and D. Klein
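A sketch of the usual textbook transition model for this kind of grid world (the 0.8/0.1/0.1 noise split and the move helper are assumptions about the standard setup, not read from these slides):

LEFT_OF = {"N": "W", "W": "S", "S": "E", "E": "N"}
RIGHT_OF = {"N": "E", "E": "S", "S": "W", "W": "N"}

def transition(state, action, move):
    # Returns {next_state: probability}. The agent goes in the intended
    # direction with probability 0.8 and slips to either side with 0.1 each;
    # `move(state, direction)` is a deterministic helper (bumping into a wall
    # leaves the agent where it is).
    outcomes = {}
    for direction, p in [(action, 0.8), (LEFT_OF[action], 0.1), (RIGHT_OF[action], 0.1)]:
        s_next = move(state, direction)
        outcomes[s_next] = outcomes.get(s_next, 0.0) + p
    return outcomes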

Goal: Policy Source: P. Abbeel and D. Klein

Grid world
R(s) = a small negative value for every non-terminal state
Transition model: moves are noisy, so the agent does not always end up in the intended direction

Grid world
Optimal policy when R(s) is a small negative value for every non-terminal state

Grid world Optimal policies for other values of R(s):