
1 CS 188: Artificial Intelligence Fall 2009 Lecture 8: MEU / Utilities 9/22/2009 Dan Klein – UC Berkeley Many slides over the course adapted from either Stuart Russell or Andrew Moore 1

2 Announcements  There is lecture as usual on 9/24  W1 is due today (lecture or drop box)  P2 is due on 9/30  Contest is out today! 2

3 Expectimax for Pacman  Results from playing 5 games:  Minimax Pacman vs. minimizing ghost: won 5/5, avg. score 493  Minimax Pacman vs. random ghost: won 5/5, avg. score 483  Expectimax Pacman vs. minimizing ghost: won 1/5, avg. score -303  Expectimax Pacman vs. random ghost: won 5/5, avg. score 503  Pacman used depth 4 search with an eval function that avoids trouble  Ghost used depth 2 search with an eval function that seeks Pacman [demo: world assumptions]

4 Expectimax Search  Chance nodes  Chance nodes are like min nodes, except the outcome is uncertain  Calculate expected utilities  Chance nodes average successor values (weighted)  Each chance node has a probability distribution over its outcomes (called a model)  For now, assume we’re given the model  Utilities for terminal states  Static evaluation functions give us limited-depth search [Figure: one search ply of an expectimax tree; the leaf evaluations give an estimate of the true expectimax value, which would require a lot of work to compute]
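
A minimal sketch of the chance-node computation described above, in Python; it is not from the lecture, and the `game` object with `is_terminal`, `is_max_node`, `evaluate`, `successors`, and `probability` methods is a hypothetical stand-in for a real game interface.

```python
def expectimax_value(state, depth, game):
    """Depth-limited expectimax: max nodes take the best successor value,
    chance nodes take the probability-weighted average of successor values."""
    if game.is_terminal(state) or depth == 0:
        return game.evaluate(state)  # static evaluation at the depth limit
    if game.is_max_node(state):
        return max(expectimax_value(s, depth - 1, game)
                   for s in game.successors(state))
    # chance node: expectation under the given outcome model
    return sum(game.probability(state, s) * expectimax_value(s, depth - 1, game)
               for s in game.successors(state))
```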

5 Expectimax Quantities 5

6 Expectimax Pruning? 6

7 Expectimax Evaluation  Evaluation functions quickly return an estimate for a node’s true value (which value, expectimax or minimax?)  For minimax, evaluation function scale doesn’t matter  We just want better states to have higher evaluations (get the ordering right)  We call this insensitivity to monotonic transformations  For expectimax, we need magnitudes to be meaningful [Figure: leaf values 0, 40, 20, 30; after the monotonic transformation x², they become 0, 1600, 400, 900, which preserves the ordering but can change expectimax’s choice]

8 Mixed Layer Types  E.g. Backgammon  Expectiminimax  Environment is an extra player that moves after each agent  Chance nodes take expectations, otherwise like minimax ExpectiMinimax-Value(state):
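
The body of ExpectiMinimax-Value(state) did not survive extraction; a standard reconstruction (an assumption, not copied from the slide) is:

$$
\text{value}(s) =
\begin{cases}
\text{utility}(s) & \text{if } s \text{ is terminal} \\
\max_{s' \in \text{succ}(s)} \text{value}(s') & \text{if } s \text{ is a max node} \\
\min_{s' \in \text{succ}(s)} \text{value}(s') & \text{if } s \text{ is a min node} \\
\sum_{s' \in \text{succ}(s)} P(s' \mid s)\, \text{value}(s') & \text{if } s \text{ is a chance node}
\end{cases}
$$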

9 Stochastic Two-Player  Dice rolls increase b: 21 possible rolls with 2 dice  Backgammon  20 legal moves  Depth 4 = 20 x (21 x 20)³ ≈ 1.2 x 10⁹  As depth increases, probability of reaching a given search node shrinks  So usefulness of search is diminished  So limiting depth is less damaging  But pruning is trickier…  TDGammon uses depth-2 search + very good evaluation function + reinforcement learning: world-champion level play  1st AI world champion in any game!

10 Multi-Agent Utilities  Similar to minimax:  Terminals have utility tuples  Node values are also utility tuples  Each player maximizes its own utility  Can give rise to cooperation and competition dynamically… [Figure: three-player game tree with terminal utility tuples such as (1,6,6), (7,1,2), (6,1,2), (7,2,1), (5,1,7), (1,5,2), (7,7,1), (5,2,5)] 10

11 Maximum Expected Utility  Principle of maximum expected utility:  A rational agent should choose the action which maximizes its expected utility, given its knowledge  Questions:  Where do utilities come from?  How do we know such utilities even exist?  Why are we taking expectations of utilities (not, e.g. minimax)?  What if our behavior can’t be described by utilities? 11
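
One standard way to write the principle symbolically (not shown on the slide): the rational action is

$$a^* = \arg\max_{a} \; EU(a) = \arg\max_{a} \sum_{s} P(s \mid a)\, U(s),$$

where P(s | a) comes from the agent’s knowledge of how outcomes depend on its action.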

12 Utilities: Unknown Outcomes 12 [Figure: going to the airport from home; choose between take surface streets and take freeway; outcomes include clear (10 min, arrive early), traffic (50 min, arrive late), and clear (20 min, arrive on time)]

13 Preferences  An agent chooses among:  Prizes: A, B, etc.  Lotteries: situations with uncertain prizes  Notation: 13
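
The notation itself was lost in extraction; the usual symbols (an assumption based on the standard treatment) are:

$$A \succ B \;\;(\text{A preferred to B}), \qquad A \sim B \;\;(\text{indifference between A and B}), \qquad L = [p, A;\; (1-p), B] \;\;(\text{a lottery})$$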

14 Rational Preferences  We want some constraints on preferences before we call them rational  For example: an agent with intransitive preferences can be induced to give away all of its money  If B > C, then an agent with C would pay (say) 1 cent to get B  If A > B, then an agent with B would pay (say) 1 cent to get A  If C > A, then an agent with A would pay (say) 1 cent to get C 14

15 Rational Preferences  Preferences of a rational agent must obey constraints.  The axioms of rationality:  Theorem: Rational preferences imply behavior describable as maximization of expected utility 15
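
The axioms themselves were an image; the standard von Neumann & Morgenstern constraints (a reconstruction, not copied from the slide) are:

$$
\begin{aligned}
&\textbf{Orderability:} && (A \succ B) \lor (B \succ A) \lor (A \sim B) \\
&\textbf{Transitivity:} && (A \succ B) \land (B \succ C) \Rightarrow (A \succ C) \\
&\textbf{Continuity:} && A \succ B \succ C \Rightarrow \exists p\; [p, A;\; (1-p), C] \sim B \\
&\textbf{Substitutability:} && A \sim B \Rightarrow [p, A;\; (1-p), C] \sim [p, B;\; (1-p), C] \\
&\textbf{Monotonicity:} && A \succ B \Rightarrow \big(p \ge q \Leftrightarrow [p, A;\; (1-p), B] \succeq [q, A;\; (1-q), B]\big)
\end{aligned}
$$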

16 MEU Principle  Theorem:  [Ramsey, 1931; von Neumann & Morgenstern, 1944]  Given any preferences satisfying these constraints, there exists a real-valued function U such that:  Maximum expected utility (MEU) principle:  Choose the action that maximizes expected utility  Note: an agent can be entirely rational (consistent with MEU) without ever representing or manipulating utilities and probabilities  E.g., a lookup table for perfect tic-tac-toe, a reflex vacuum cleaner 16
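
The theorem's statement was an image; in the standard form (a reconstruction consistent with the surrounding text), there exists a U with

$$U(A) \ge U(B) \;\Leftrightarrow\; A \succeq B, \qquad U\big([p_1, S_1;\; \ldots;\; p_n, S_n]\big) = \sum_i p_i\, U(S_i).$$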

17 Utility Scales  Normalized utilities: u⁺ = 1.0, u⁻ = 0.0  Micromorts: one-millionth chance of death, useful for paying to reduce product risks, etc.  QALYs: quality-adjusted life years, useful for medical decisions involving substantial risk  Note: behavior is invariant under positive linear transformation  With deterministic prizes only (no lottery choices), only ordinal utility can be determined, i.e., a total order on prizes 17

18 Human Utilities  Utilities map states to real numbers. Which numbers?  Standard approach to assessment of human utilities:  Compare a state A to a standard lottery L_p between  “best possible prize” u⁺ with probability p  “worst possible catastrophe” u⁻ with probability 1-p  Adjust lottery probability p until A ~ L_p  Resulting p is a utility in [0,1] 18

19 Money  Money does not behave as a utility function, but we can talk about the utility of having money (or being in debt)  Given a lottery L = [p, $X; (1-p), $Y]  The expected monetary value EMV(L) is p*X + (1-p)*Y  U(L) = p*U($X) + (1-p)*U($Y)  Typically, U(L) < U( EMV(L) ): why?  In this sense, people are risk-averse  When deep in debt, we are risk-prone  Utility curve: for what probability p am I indifferent between:  Some sure outcome x  A lottery [p,$M; (1-p),$0], M large 19
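
A small numeric sketch of risk aversion (not from the slides; the square-root utility is an arbitrary concave choice for illustration):

```python
import math

def emv(lottery):
    """Expected monetary value of a lottery given as [(prob, amount), ...]."""
    return sum(p * x for p, x in lottery)

def expected_utility(lottery, u):
    """Expected utility of the lottery under utility function u."""
    return sum(p * u(x) for p, x in lottery)

u = math.sqrt                        # concave utility => risk-averse behavior
L = [(0.5, 1000.0), (0.5, 0.0)]      # the 50/50 $1000-or-$0 lottery used on the next slides

print(emv(L))                        # 500.0
print(expected_utility(L, u))        # ~15.81 = 0.5 * sqrt(1000)
print(u(emv(L)))                     # ~22.36 = sqrt(500) > U(L): risk aversion
print(expected_utility(L, u) ** 2)   # ~250: certainty equivalent under this utility
```

With this particular utility the certainty equivalent (about 250) is well below the EMV (500); that gap plays the role of the insurance premium discussed below.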

20 Money  Money does not behave as a utility function  Given a lottery L:  Define expected monetary value EMV(L)  Usually U(L) < U(EMV(L))  I.e., people are risk-averse  Utility curve: for what probability p am I indifferent between:  A prize x  A lottery [p,$M; (1-p),$0] for large M?  Typical empirical data, extrapolated with risk-prone behavior: 20

21 Example: Insurance  Consider the lottery [0.5,$1000; 0.5,$0]  What is its expected monetary value? ($500)  What is its certainty equivalent?  Monetary value acceptable in lieu of lottery  $400 for most people  Difference of $100 is the insurance premium  There’s an insurance industry because people will pay to reduce their risk  If everyone were risk-neutral, no insurance needed! 21

22 Example: Insurance  Because people ascribe different utilities to different amounts of money, insurance agreements can increase both parties’ expected utility You own a car. Your lottery: L_Y = [0.8, $0 ; 0.2, -$200] i.e., 20% chance of crashing You do not want -$200! U_Y(L_Y) = 0.2*U_Y(-$200) = -200 U_Y(-$50) = -150 Your utility table: U_Y($0) = 0, U_Y(-$50) = -150, U_Y(-$200) = -1000

23 Example: Insurance  Because people ascribe different utilities to different amounts of money, insurance agreements can increase both parties’ expected utility You own a car. Your lottery: L_Y = [0.8, $0 ; 0.2, -$200] i.e., 20% chance of crashing You do not want -$200! U_Y(L_Y) = 0.2*U_Y(-$200) = -200 U_Y(-$50) = -150 Insurance company buys risk: L_I = [0.8, $50 ; 0.2, -$150] i.e., $50 revenue + your L_Y Insurer is risk-neutral: U(L) = U(EMV(L)) U_I(L_I) = U(0.8*50 + 0.2*(-150)) = U($10) > U($0)

24 Example: Human Rationality?  Famous example of Allais (1953)  A: [0.8,$4k; 0.2,$0]  B: [1.0,$3k; 0.0,$0]  C: [0.2,$4k; 0.8,$0]  D: [0.25,$3k; 0.75,$0]  Most people prefer B > A, C > D  But if U($0) = 0, then  B > A ⇒ U($3k) > 0.8 U($4k)  C > D ⇒ 0.8 U($4k) > U($3k)  so the two preferences together are inconsistent with any utility function 24

25 Reinforcement Learning  Basic idea:  Receive feedback in the form of rewards  Agent’s utility is defined by the reward function  Must learn to act so as to maximize expected rewards  Change the rewards, change the learned behavior  Examples:  Playing a game, reward at the end for winning / losing  Vacuuming a house, reward for each piece of dirt picked up  Automated taxi, reward for each passenger delivered  First: Need to master MDPs 25 [DEMOS]

26 Grid World  The agent lives in a grid  Walls block the agent’s path  The agent’s actions do not always go as planned:  80% of the time, the action North takes the agent North (if there is no wall there)  10% of the time, North takes the agent West; 10% East  If there is a wall in the direction the agent would have been taken, the agent stays put  Big rewards come at the end

27 Markov Decision Processes  An MDP is defined by:  A set of states s ∈ S  A set of actions a ∈ A  A transition function T(s,a,s’)  Prob that a from s leads to s’  i.e., P(s’ | s,a)  Also called the model  A reward function R(s, a, s’)  Sometimes just R(s) or R(s’)  A start state (or distribution)  Maybe a terminal state  MDPs are a family of non-deterministic search problems  Reinforcement learning: MDPs where we don’t know the transition or reward functions 27
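
A minimal way to package these components in code; this is a sketch under assumed names, not the course's project API.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class MDP:
    states: List                            # the set of states S
    actions: List                           # the set of actions A
    transitions: Dict[Tuple, List[Tuple]]   # (s, a) -> [(s', P(s'|s,a)), ...]  (the model)
    reward: Callable                        # R(s, a, s')
    start: object                           # start state (a distribution also works)

    def T(self, s, a):
        """Successor distribution for taking action a in state s."""
        return self.transitions.get((s, a), [])   # empty for terminal states
```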

28 Solving MDPs  In deterministic single-agent search problems, we want an optimal plan, or sequence of actions, from start to a goal  In an MDP, we want an optimal policy π*: S → A  A policy π gives an action for each state  An optimal policy maximizes expected utility if followed  Defines a reflex agent Optimal policy when R(s, a, s’) = -0.03 for all non-terminals s [Demo]

29 Example Optimal Policies [Figure: optimal grid-world policies for R(s) = -0.01, R(s) = -0.03, R(s) = -0.4, and R(s) = -2.0] 29


31 Example: High-Low  Three card types: 2, 3, 4  Infinite deck, twice as many 2’s  Start with 3 showing  After each card, you say “high” or “low”  New card is flipped  If you’re right, you win the points shown on the new card  Ties are no-ops  If you’re wrong, game ends  Differences from expectimax:  #1: get rewards as you go  #2: you might play forever! 31

32 High-Low  States: 2, 3, 4, done  Actions: High, Low  Model: T(s, a, s’):  P(s’=done | 4, High) = 3/4  P(s’=2 | 4, High) = 0  P(s’=3 | 4, High) = 0  P(s’=4 | 4, High) = 1/4  P(s’=done | 4, Low) = 0  P(s’=2 | 4, Low) = 1/2  P(s’=3 | 4, Low) = 1/4  P(s’=4 | 4, Low) = 1/4  …  Rewards: R(s, a, s’):  Number shown on s’ if the guess was correct  0 otherwise  Start: 3 Note: could choose actions with search. How? 32
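
The slide's partial model, written out as data; this is just a transcription sketch (only the state-4 entries appear on the slide, and the reward rule follows the game description on the previous slide):

```python
# P(s' | s, a) for the entries listed on the slide (state 4 only); the rest are elided
T = {
    (4, "High"): {"done": 3/4, 2: 0, 3: 0, 4: 1/4},
    (4, "Low"):  {"done": 0, 2: 1/2, 3: 1/4, 4: 1/4},
    # ... remaining (state, action) pairs omitted, as on the slide
}

def R(s, a, s_next):
    """Points shown on the new card for a correct guess; 0 for ties, wrong guesses, or game over."""
    if s_next == "done":
        return 0
    correct = (a == "High" and s_next > s) or (a == "Low" and s_next < s)
    return s_next if correct else 0
```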

33 Example: High-Low [Figure: search tree starting from state 3. A max node chooses High or Low; each choice leads to a chance node over the next card, with transition probabilities and rewards such as T = 0.5, R = 2; T = 0.25, R = 3; T = 0, R = 4; T = 0.25, R = 0] 33

34 MDP Search Trees  Each MDP state gives an expectimax-like search tree  s is a state  (s, a) is a q-state  (s, a, s’) is called a transition, with T(s,a,s’) = P(s’|s,a) and reward R(s,a,s’) 34

35 Utilities of Sequences  In order to formalize optimality of a policy, need to understand utilities of sequences of rewards  Typically consider stationary preferences:  Theorem: only two ways to define stationary utilities  Additive utility:  Discounted utility: Assuming that reward depends only on state for these slides! 35
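
The two definitions written out (the slide's formulas were images; these are the standard forms, with rewards depending only on the state as noted above):

$$U([r_0, r_1, r_2, \ldots]) = r_0 + r_1 + r_2 + \cdots \quad \text{(additive)}$$

$$U([r_0, r_1, r_2, \ldots]) = r_0 + \gamma r_1 + \gamma^2 r_2 + \cdots, \;\; 0 < \gamma < 1 \quad \text{(discounted)}$$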

36 Infinite Utilities?!  Problem: infinite sequences with infinite rewards  Solutions:  Finite horizon:  Terminate after a fixed T steps  Gives nonstationary policy (π depends on time left)  Absorbing state(s): guarantee that for every policy, agent will eventually “die” (like “done” for High-Low)  Discounting: for 0 < γ < 1  Smaller γ means smaller “horizon” – shorter term focus 36

37 Discounting  Typically discount rewards by γ < 1 each time step  Sooner rewards have higher utility than later rewards  Also helps the algorithms converge 37

38 Optimal Utilities  Fundamental operation: compute the optimal utilities of states s  Define the utility of a state s: V*(s) = expected return starting in s and acting optimally  Define the utility of a q-state (s,a): Q*(s,a) = expected return starting in s, taking action a and thereafter acting optimally  Define the optimal policy: π*(s) = optimal action from state s 38
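
In symbols, one standard way to write these definitions (not copied from the slide):

$$V^*(s) = \mathbb{E}\Big[\textstyle\sum_{t \ge 0} \gamma^t r_t \,\Big|\, s_0 = s,\ \text{acting optimally}\Big], \qquad Q^*(s,a) = \mathbb{E}\Big[\textstyle\sum_{t \ge 0} \gamma^t r_t \,\Big|\, s_0 = s,\ a_0 = a,\ \text{then acting optimally}\Big], \qquad \pi^*(s) = \arg\max_a Q^*(s,a)$$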

39 The Bellman Equations  Definition of utility leads to a simple relationship amongst optimal utility values: Optimal rewards = maximize over first action and then follow the optimal policy  Formally (see the equations below): 39
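
The equations themselves were an image; the standard form consistent with the description above is:

$$V^*(s) = \max_a Q^*(s,a), \qquad Q^*(s,a) = \sum_{s'} T(s,a,s')\,\big[R(s,a,s') + \gamma\, V^*(s')\big]$$

so that

$$V^*(s) = \max_a \sum_{s'} T(s,a,s')\,\big[R(s,a,s') + \gamma\, V^*(s')\big]$$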

40 Solving MDPs  We want to find the optimal policy π*  Proposal 1: modified expectimax search 40

41 MDP Search Trees?  Problems:  This tree is usually infinite (why?)  The same states appear over and over (why?)  There’s actually one tree per state (why?)  Ideas:  Compute to a finite depth (like expectimax)  Consider returns from sequences of increasing length  Cache values so we don’t repeat work 41

42 Value Estimates  Calculate estimates V_k*(s)  Not the optimal value of s!  The optimal value considering only the next k time steps (k rewards)  As k → ∞, it approaches the optimal value  Why:  If discounting, distant rewards become negligible  If terminal states are reachable from everywhere, the fraction of episodes not ending becomes negligible  Otherwise, we can get infinite expected utility and then this approach actually won’t work 42

43 Memoized Recursion?  Recurrences:  Cache all function call results so you never repeat work  What happened to the evaluation function? 43
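
A hedged sketch of the memoized, depth-limited recursion in Python; the `mdp` object follows the hypothetical interface sketched after the MDP definition slide, and a value of 0 stands in for the evaluation function at the depth limit.

```python
from functools import lru_cache

def make_depth_limited_values(mdp, gamma):
    @lru_cache(maxsize=None)              # cache results so repeated (state, depth) calls are free
    def V(s, k):
        """Estimate of the optimal value of s considering only the next k rewards."""
        if k == 0:
            return 0.0                    # depth limit: no evaluation function, just 0
        q_values = []
        for a in mdp.actions:
            outcomes = mdp.T(s, a)        # [(s', P(s'|s,a)), ...]
            if outcomes:
                q_values.append(sum(p * (mdp.reward(s, a, s2) + gamma * V(s2, k - 1))
                                    for s2, p in outcomes))
        return max(q_values) if q_values else 0.0   # terminal states have no transitions
    return V
```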

44 Mixed Layer Types  E.g. Backgammon  Expectiminimax  Environment is an extra player that moves after each agent  Chance nodes take expectations, otherwise like minimax 44

45 Stochastic Two-Player  Dice rolls increase b: 21 possible rolls with 2 dice  Backgammon  20 legal moves  Depth 4 = 20 x (21 x 20)³ ≈ 1.2 x 10⁹  As depth increases, probability of reaching a given node shrinks  So value of lookahead is diminished  So limiting depth is less damaging  But pruning is less possible…  TDGammon uses depth-2 search + very good eval function + reinforcement learning: world-champion level play 45

46 Expectimax Evaluation  For minimax search, terminal utilities are insensitive to monotonic transformations  We just want better states to have higher evaluations (get the ordering right)  For expectimax, we need the magnitudes to be meaningful as well  E.g. must know whether a 50% / 50% lottery between A and B is better than 100% chance of C  100 or -10 vs 0 is different than 10 or -100 vs 0  How do we know such utilities even exist? 46

