Making Simple Decisions

Making Simple Decisions Chapter 16

Topics
- Decision making under uncertainty
- Expected utility
- Utility theory and rationality
- Utility functions
- Multi-attribute utility functions
- Preference structures
- Decision networks
- Value of information

Uncertain Outcome of Actions
Some actions have uncertain outcomes.
- Action: spend $10 on a lottery ticket that pays $10,000 to the winner.
- Outcomes: {win, not-win}. Each outcome has some merit (utility): win → gain $9,990; not-win → lose $10.
- The outcomes of this action have a probability distribution: (0.0001, 0.9999).
Should I take this action?

Expected Utility
- Random variable X with n values x_1, …, x_n and distribution (p_1, …, p_n); X is the outcome of performing action A (i.e., the state reached after A is taken).
- Utility function U maps states to numerical utilities (values).
- The expected utility of performing action A is
  EU[A] = Σ_{i=1..n} p(x_i | A) U(x_i)
  where p(x_i | A) is the probability of each outcome and U(x_i) is the utility of that outcome.
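As a quick check of the formula, a minimal sketch in Python; the probability/utility pairs are the lottery numbers from the previous slide.

```python
# Expected utility of an action whose outcomes occur with the given
# probabilities and utilities: EU[A] = sum_i p(x_i|A) * U(x_i).
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# The lottery from the previous slide: win $9,990 with probability
# 0.0001, otherwise lose the $10 ticket price.
lottery = [(0.0001, 9990), (0.9999, -10)]
print(expected_utility(lottery))  # approximately -9.0: a bad bet on average
```

So the expected monetary value is about −$9 per ticket; whether a rational agent plays depends on its utility for money, not on money itself, which is what the rest of the chapter develops.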

One State/One Action Example
EU(A1) = 100 × 0.2 + 50 × 0.7 + 70 × 0.1 = 20 + 35 + 7 = 62
[Figure: action A1 leads to outcome states s1, s2, s3 with probabilities 0.2, 0.7, 0.1 and utilities 100, 50, 70.]

One State/Two Actions Example
EU(A1) = 62
EU(A2) = 0.2 × 50 + 0.8 × 80 = 74
[Figure: from state s0, action A1 reaches s1, s2, s3 with probabilities 0.2, 0.7, 0.1; action A2 reaches s2 and s4 with probabilities 0.2 and 0.8; utilities U(s1) = 100, U(s2) = 50, U(s3) = 70, U(s4) = 80.]

Introducing Action Costs
With action costs of 5 for A1 and 25 for A2:
U1(s0) = 62 − 5 = 57
U2(s0) = 74 − 25 = 49
[Figure: the same network as before, with costs −5 on A1 and −25 on A2.]
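Putting the last three slides together, a sketch of choosing between the two actions with costs subtracted; A2's outcome states and probabilities are read off the slide figure.

```python
# Net expected utility of each action: EU(outcomes) minus action cost.
def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

actions = {
    "A1": {"cost": 5,  "outcomes": [(0.2, 100), (0.7, 50), (0.1, 70)]},
    "A2": {"cost": 25, "outcomes": [(0.2, 50), (0.8, 80)]},
}

def net_eu(name):
    a = actions[name]
    return expected_utility(a["outcomes"]) - a["cost"]

for name in actions:
    print(name, net_eu(name))  # A1: 62 - 5 = 57, A2: 74 - 25 = 49

best = max(actions, key=net_eu)
print(best)  # A1 is now preferred once costs are included
```

Note the cost flips the preference: without costs A2 wins (74 vs. 62), with costs A1 does (57 vs. 49).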

MEU Principle
Decision theory: a rational agent should choose the action that maximizes the agent's expected utility. Maximizing expected utility (MEU) is a normative criterion for the rational choice of actions. It requires a complete model of:
- Actions
- Utilities
- States
Even with a complete model, computing the optimal action may be computationally intractable.

Comparing outcomes
Which is better?
A = being rich and sunbathing where it's warm
B = being rich and sunbathing where it's cool
C = being poor and sunbathing where it's warm
D = being poor and sunbathing where it's cool
Multiattribute utility theory: A clearly dominates B (A > B); likewise A > C, C > D, and A > D. But what about B vs. C?
Simplest case: an additive value function (just add the individual attribute utilities). Other approaches use a weighted sum, based on the relative importance of the attributes. Learning the combined utility function is similar to learning a joint probability table.
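A minimal sketch of a weighted additive value function for the four outcomes above; the attribute utilities and the weights (0.7 for wealth, 0.3 for climate) are illustrative assumptions, not values from the slides.

```python
# Weighted additive multiattribute value: V(x) = sum_a w_a * U_a(x).
# Weights are hypothetical; they encode that wealth matters more here.
WEIGHTS = {"wealth": 0.7, "climate": 0.3}

def additive_value(attribute_utilities):
    return sum(WEIGHTS[a] * u for a, u in attribute_utilities.items())

A = {"wealth": 1.0, "climate": 1.0}   # rich, warm
B = {"wealth": 1.0, "climate": 0.0}   # rich, cool
C = {"wealth": 0.0, "climate": 1.0}   # poor, warm
D = {"wealth": 0.0, "climate": 0.0}   # poor, cool

# The dominance relations from the slide hold for any positive weights;
# B vs. C, which dominance leaves open, is settled by the weights:
# V(B) = 0.7 > V(C) = 0.3.
for name, x in [("A", A), ("B", B), ("C", C), ("D", D)]:
    print(name, additive_value(x))
```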

Decision networks
- Extend Bayesian nets to handle actions and utilities (a.k.a. influence diagrams)
- Make use of Bayesian net inference
- Useful application: value of information

Decision network representation
- Chance nodes: random variables, as in Bayesian nets
- Decision nodes: actions the decision maker can take
- Utility/value nodes: the utility of the outcome state

Airport example

Airport example II

Evaluating decision networks
1. Set the evidence variables for the current state.
2. For each possible value of the decision node (assume there is just one):
   a. Set the decision node to that value.
   b. Calculate the posterior probabilities for the parent nodes of the utility node, using BN inference.
   c. Calculate the resulting utility for the action.
3. Return the action with the highest utility.

Exercise: Umbrella network
Decision: take / don't take the umbrella.
Nodes: Umbrella (decision), Lug umbrella, Weather, Forecast, Happiness (utility).
P(rain) = 0.4
P(lug | take) = 1.0;  P(~lug | ~take) = 1.0
P(forecast f | weather w):
  f      w        p(f | w)
  sunny  rain     0.3
  rainy  rain     0.7
  sunny  no rain  0.8
  rainy  no rain  0.2
Utilities:
  U(lug, rain)   = -25
  U(lug, ~rain)  = 0
  U(~lug, rain)  = -100
  U(~lug, ~rain) = 100
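One way to work the exercise for the case where we act before seeing the forecast, following the evaluation procedure from the previous slide; the Boolean encodings are my own. Since lugging the umbrella is deterministic given the decision, utilities attach directly to (take, weather) pairs.

```python
# Umbrella network, evaluated against the prior P(rain) = 0.4.
P_RAIN = 0.4

UTILITY = {  # U(lug umbrella?, rain?)
    (True,  True):  -25,
    (True,  False):   0,
    (False, True): -100,
    (False, False): 100,
}

def eu(take, p_rain=P_RAIN):
    """Expected utility of a decision given the probability of rain."""
    return p_rain * UTILITY[(take, True)] + (1 - p_rain) * UTILITY[(take, False)]

for take in (True, False):
    print("take" if take else "leave", eu(take))
# take:  0.4 * -25  + 0.6 * 0   = -10
# leave: 0.4 * -100 + 0.6 * 100 =  20
```

So under the prior the MEU action is to leave the umbrella at home, with EU = 20.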

Value of Perfect Information (VPI)
How much is it worth to observe (with certainty) a random variable X?
Suppose the agent's current knowledge is E. The value of the current best action α is
  EU(α | E) = max_A Σ_i U(Result_i(A)) p(Result_i(A) | E, Do(A))
The value of the new best action after observing the value of X is
  EU(α' | E, X) = max_A Σ_i U(Result_i(A)) p(Result_i(A) | E, X, Do(A))
But we don't know the value of X yet, so we must sum over its possible values x_k, each weighted by its probability p(x_k | E). The value of perfect information for X is therefore
  VPI(X) = ( Σ_k p(x_k | E) EU(α_{x_k} | x_k, E) ) − EU(α | E)
i.e., the expected utility of the best action given each possible value of X, minus the expected utility of the best action if we don't observe X.

VPI exercise: Umbrella network
What's the value of knowing the weather forecast before leaving home?
(Same network as before: P(rain) = 0.4; P(lug | take) = 1.0; P(~lug | ~take) = 1.0;
 P(f | w): sunny|rain 0.3, rainy|rain 0.7, sunny|no rain 0.8, rainy|no rain 0.2;
 U(lug, rain) = -25, U(lug, ~rain) = 0, U(~lug, rain) = -100, U(~lug, ~rain) = 100.)
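A sketch of the VPI computation by direct enumeration, applying the formula from the previous slide to this network; the encodings are my own, and the posterior P(rain | forecast) comes from Bayes' rule.

```python
# VPI of the Forecast node in the umbrella network.
P_RAIN = 0.4
P_F_GIVEN_W = {  # P(forecast | weather): (forecast, rain?) -> prob
    ("sunny", True): 0.3, ("rainy", True): 0.7,
    ("sunny", False): 0.8, ("rainy", False): 0.2,
}
UTILITY = {(True, True): -25, (True, False): 0,
           (False, True): -100, (False, False): 100}

def eu(take, p_rain):
    return p_rain * UTILITY[(take, True)] + (1 - p_rain) * UTILITY[(take, False)]

def best_eu(p_rain):
    return max(eu(take, p_rain) for take in (True, False))

# Expected utility of the best action with no forecast (leave, EU = 20).
eu_now = best_eu(P_RAIN)

# Sum over forecast values: P(f) * EU(best action | f).
eu_with_forecast = 0.0
for f in ("sunny", "rainy"):
    p_f = P_F_GIVEN_W[(f, True)] * P_RAIN + P_F_GIVEN_W[(f, False)] * (1 - P_RAIN)
    p_rain_given_f = P_F_GIVEN_W[(f, True)] * P_RAIN / p_f  # Bayes' rule
    eu_with_forecast += p_f * best_eu(p_rain_given_f)

print(eu_with_forecast - eu_now)  # VPI(Forecast) is approximately 9
```

The forecast shifts P(rain) to 0.2 (sunny, best EU 60) or 0.7 (rainy, best EU −17.5), giving 0.6 × 60 + 0.4 × (−17.5) = 29 against the no-forecast EU of 20, so the forecast is worth 9 utility units.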