Axioms. Let W be the set of statements known to be true in a domain. An axiom is a rule presumed to be true. An axiomatic set is a collection of axioms. Given an axiomatic set A, the domain theory of A, written domTh(A), is the collection of all statements that can be derived from A.

Axioms (II). A problem frequently studied by mathematicians: given W, can we construct a (finite) axiomatic set A such that domTh(A) = W? Potential difficulties:
Inconsistency: W ⊂ domTh(A), i.e., A derives statements that are not true in the domain.
Incompleteness: domTh(A) ⊂ W, i.e., some true statements cannot be derived from A.
Theorem (Gödel): any axiomatic set for arithmetic is either inconsistent or incomplete.
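The notion of a domain theory can be made concrete with a toy forward-chaining sketch. The propositions, rules, and target set W below are invented purely for illustration, and `domain_theory` is a hypothetical helper, not anything from the slides:

```python
def domain_theory(facts, rules):
    """Compute domTh(A): everything derivable from the axiomatic set A.
    `facts` are axioms that hold outright; each rule is (premises, conclusion)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                changed = True
    return derived

# Hypothetical axiomatic set A and domain W
A_facts = {"p"}
A_rules = [({"p"}, "q"), ({"q"}, "r")]
W = {"p", "q", "r", "s"}

dom = domain_theory(A_facts, A_rules)
# A is consistent (domTh(A) is contained in W) but incomplete:
# "s" is true in the domain yet cannot be derived from A.
print(dom <= W, W <= dom)  # True False
```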

Recap from Previous Class. First-order logic is not sufficient for many problems: often we have only a degree of belief (a probability). Decision theory = probability theory + utility theory. Covered so far: probability distributions, expected value, conditional probability, the axioms of probability, Bruno de Finetti's theorem. Today: utility theory.

Utility of a Decision. CSE 395/495. Resources: Russell and Norvig's book.

Two Famous Quotes. "… so they go on in strange paradox, decided only to be undecided, resolved to be irresolute, adamant for drift, solid for fluidity, all-powerful to be impotent" (Churchill, 1936). "To judge what one must do to obtain a good or avoid an evil, it is necessary to consider not only the good and the evil in itself, but also the probability that it happens or does not happen" (Arnauld, 1662).

Utility Function. U: States → [0, ∞). The utility function captures an agent's preferences. Given an action A, let Result_1(A), Result_2(A), … denote the possible outcomes of A, let Do(A) denote that action A is executed, and let E be the available evidence. Then the expected utility of A given E is:
EU(A|E) = Σ_i P(Result_i(A) | E, Do(A)) · U(Result_i(A))

Principle of Maximum Expected Utility (MEU). An agent should choose an action that maximizes expected utility. Suppose that taking actions updates the probabilities of states; which action should be taken? 1. Calculate the probabilities of the current state. 2. Calculate the probabilities of the outcomes of each action. 3. Select the action with the highest expected utility. MEU says: in state S with known evidence E, choose A such that for any other action A′, EU(A|E) ≥ EU(A′|E).
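The EU formula and the MEU rule above fit in a few lines of code. This is a minimal sketch: the two actions, their outcome probabilities, and the utilities are made-up illustration values, and `expected_utility`/`meu_action` are hypothetical helper names:

```python
def expected_utility(outcomes):
    """EU(A|E) = sum_i P(Result_i(A) | E, Do(A)) * U(Result_i(A)).
    `outcomes` maps each result to a (probability, utility) pair."""
    return sum(p * u for p, u in outcomes.values())

def meu_action(actions):
    """MEU rule: pick the action whose expected utility is highest."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

# Hypothetical two-action decision problem
actions = {
    "take_highway":  {"on_time": (0.7, 100), "late": (0.3, 0)},
    "take_backroad": {"on_time": (0.9, 100), "late": (0.1, 0)},
}
print(meu_action(actions))  # take_backroad (EU of 90 vs. 70)
```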

MEU Doesn't Solve All AI Problems. Difficulties: in EU(A|E) = Σ_i P(Result_i(A) | E, Do(A)) · U(Result_i(A)), the state and the utilities U(Result_i(A)) might not be known completely, and computing P(Result_i(A) | E, Do(A)) requires a causal model; computing it is NP-complete. However, MEU is adequate if the utility function reflects the performance measure by which one's behavior is judged. Example: grades vs. knowledge.

Lotteries. To define utility, we first define the semantics of preferences. Preferences are defined over scenarios called lotteries. A lottery L with two possible outcomes, A with probability p and B with probability (1 − p), is written L = [p, A; (1 − p), B]. The outcome of a lottery can be a state or another lottery.

Preferences. Let A and B be states and/or lotteries. Then: A ≻ B denotes that A is preferred to B; A ~ B denotes indifference between A and B; A ≿ B denotes that either A ≻ B or A ~ B.

Axioms of Utility Theory.
Orderability: exactly one of A ≻ B, B ≻ A, or A ~ B holds.
Transitivity: if A ≻ B and B ≻ C, then A ≻ C.
Continuity: if A ≻ B ≻ C, then there exists p such that [p, A; (1 − p), C] ~ B.
Substitutability: if A ~ B, then for any C and any p, [p, A; (1 − p), C] ~ [p, B; (1 − p), C].

Axioms of Utility Theory (II).
Monotonicity: if A ≻ B, then p ≥ q iff [p, A; (1 − p), B] ≿ [q, A; (1 − q), B].
Decomposability ("no fun in gambling"): [p, A; (1 − p), [q, B; (1 − q), C]] ~ [p, A; (1 − p)q, B; (1 − p)(1 − q), C].
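Decomposability can be sanity-checked numerically: under expected utility, a compound lottery and its flattened form are worth the same. This sketch is not from the slides; the outcomes A, B, C, their utilities, and the probabilities p, q are arbitrary illustration values:

```python
# Arbitrary utilities for the three atomic outcomes
U = {"A": 10.0, "B": 4.0, "C": 1.0}

def eu(lottery):
    """Expected utility of [p1, O1; p2, O2; ...], where an outcome may be
    a state (a string keyed into U) or a nested lottery (a list of pairs)."""
    return sum(p * (U[o] if isinstance(o, str) else eu(o))
               for p, o in lottery)

p, q = 0.3, 0.6
# [p, A; (1-p), [q, B; (1-q), C]]  vs.  [p, A; (1-p)q, B; (1-p)(1-q), C]
compound = [(p, "A"), (1 - p, [(q, "B"), (1 - q, "C")])]
flat = [(p, "A"), ((1 - p) * q, "B"), ((1 - p) * (1 - q), "C")]
print(abs(eu(compound) - eu(flat)) < 1e-9)  # True
```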

Axioms of Utility Theory (III).
Utility principle: there exists U: States → [0, ∞) such that A ≻ B iff U(A) > U(B), and A ~ B iff U(A) = U(B).
Maximum expected utility principle: EU([p1, S1; p2, S2; …; pn, Sn]) = Σ_i pi U(Si).

Example. Suppose you are in a TV show and you have already earned $1,000,000 so far. Now the host proposes a gamble: he will flip a coin; if the coin comes up heads you will earn $3,000,000, but if it comes up tails you will lose the $1,000,000. What do you decide? First attempt: U(winning $X) = X, so MEU([0.5, $0; 0.5, $3,000,000]) = $1,500,000. This utility is called the expected monetary value.

Example (II). If we use the expected monetary value of the lottery, should we take the bet? Yes, because MEU([0.5, $0; 0.5, $3,000,000]) = $1,500,000 > MEU([1, $1,000,000; 0, $3,000,000]) = $1,000,000. But is this really what you would do? Not me!

Example (III). Second attempt: let S = "my current wealth", S′ = "my current wealth" + $1,000,000, and S′′ = "my current wealth" + $3,000,000. Then MEU(Accept) = 0.5 U(S) + 0.5 U(S′′) and MEU(Decline) = U(S′). If U(S) = 5, U(S′) = 8, and U(S′′) = 10, would you accept the bet? No: MEU(Accept) = 7.5 < MEU(Decline) = 8. (Figure: utility U as a function of money $.)
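The example's arithmetic can be laid out in a short script. The dollar amounts and the utilities U(S) = 5, U(S′) = 8, U(S′′) = 10 come from the example; `emv` is a hypothetical helper:

```python
def emv(lottery):
    """Expected monetary value of [p1, x1; p2, x2; ...]."""
    return sum(p * x for p, x in lottery)

# First attempt: expected monetary value says accept the gamble
accept_money  = [(0.5, 0), (0.5, 3_000_000)]
decline_money = [(1.0, 1_000_000)]
print(emv(accept_money), emv(decline_money))  # 1500000.0 1000000.0

# Second attempt: with the slide's utilities over wealth, you decline
U_S, U_S1, U_S3 = 5, 8, 10            # U(S), U(S'), U(S'')
eu_accept  = 0.5 * U_S + 0.5 * U_S3   # 7.5
eu_decline = U_S1                     # 8
print(eu_accept < eu_decline)         # True -> decline the gamble
```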

Human Judgment and Utility. Decision theory is a normative theory: it describes how agents should act. Experimental evidence suggests that people violate the axioms of utility. Tversky and Kahneman (1982) and Allais (1953) ran experiments with people in which a choice was given between A and B and then between C and D: A: 80% chance of $4000; B: 100% chance of $3000; C: 20% chance of $4000; D: 25% chance of $3000.

Human Judgment and Utility (II). The majority choose B over A and C over D. Take U($0) = 0. Then MEU([0.8, $4000; 0.2, $0]) = 0.8 U($4000) and MEU([1, $3000; 0, $4000]) = U($3000), so choosing B over A implies 0.8 U($4000) < U($3000). Also MEU([0.2, $4000; 0.8, $0]) = 0.2 U($4000) and MEU([0.25, $3000; 0.75, $0]) = 0.25 U($3000), so choosing C over D implies 0.2 U($4000) > 0.25 U($3000), i.e., 0.8 U($4000) > U($3000). Thus there can be no utility function consistent with these choices.
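The contradiction can be checked mechanically. This sketch (not from the slides) normalizes U($3000) = 1 with U($0) = 0, an assumed scaling for illustration, and scans candidate values of U($4000); no value satisfies both revealed preferences:

```python
def contradicts(u4000):
    """Given a candidate U($4000), with U($3000) = 1 and U($0) = 0,
    return True if the two majority choices cannot both be MEU-rational."""
    u3000 = 1.0
    b_over_a = u3000 > 0.8 * u4000          # B chosen over A
    c_over_d = 0.2 * u4000 > 0.25 * u3000   # C chosen over D
    return not (b_over_a and c_over_d)

# C over D forces 0.8*U($4000) > U($3000), contradicting B over A,
# so every candidate value of U($4000) fails:
print(all(contradicts(u / 100) for u in range(1, 1000)))  # True
```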

Human Judgment and Utility (III). The point is that it is very hard to model an automated agent that behaves like a human (back to the Turing test). However, utility theory does give a formal way to model decisions, and as such it is used to support users' decisions. The same can be said for similarity in CBR.

Homework. You saw the discussion of utility in the TV show example. Redo the analysis of the airport location problem from last class, but this time center your discussion on how to define a utility function rather than on the expected monetary value.