Graphical Models in Brief


Directed Graphs

A → B. This kind of graph is called a Bayesian network. Word equivalents for the arrow:
- The outcome of A influences the outcome of B
- A influences B
- A "causes" B
- A affects B
- Knowledge of A is relevant for my belief about B

PMF Equivalents and Common Graph Motifs

A "node" represents a probability table. A prior (unconditional) node A corresponds to the probability of A:

Pr(A)
  yes    0.28
  maybe  0.10
  no     0.61

PMF Equivalents and Common Graph Motifs

The graph A → B defines two probability tables:

Pr(A)
  yes    0.28
  maybe  0.10
  no     0.61

Pr(B|A)       A: yes   maybe   no
  B: low         0.28   0.03
     medium      0.18   0.13
     high        0.01   0.1    0.24

PMF Equivalents and Common Graph Motifs

For a graph over A, B, and C: how many tables do we need to represent Pr(A, B, C)? One table per node, so three in total, each node's table conditioned on that node's parents.

PMF Equivalents and Common Graph Motifs

Note: the "causal" direction is not uniquely defined. The graphs A → B and B → A can represent the same joint distribution, so we can choose the direction that is intuitive or convenient.
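As a small check of this point, the sketch below (with invented numbers for Pr(A) and Pr(B|A)) builds the joint from the A → B factorization, then reverses the edge with Bayes' rule and confirms that both directions encode the same joint:

```python
# Hypothetical tables for a two-node network A -> B (values invented).
pA = {"yes": 0.3, "no": 0.7}                      # Pr(A)
pB_given_A = {"yes": {"low": 0.2, "high": 0.8},   # Pr(B|A)
              "no":  {"low": 0.6, "high": 0.4}}

# Joint from the A -> B factorization: Pr(A, B) = Pr(A) * Pr(B|A)
joint = {(a, b): pA[a] * pB_given_A[a][b]
         for a in pA for b in ("low", "high")}

# Reverse the edge: Pr(B) by marginalizing, then Pr(A|B) by Bayes' rule
pB = {b: sum(joint[(a, b)] for a in pA) for b in ("low", "high")}
pA_given_B = {b: {a: joint[(a, b)] / pB[b] for a in pA} for b in pB}

# Both factorizations give the same joint: Pr(B) * Pr(A|B) == Pr(A) * Pr(B|A)
for (a, b), p in joint.items():
    assert abs(pB[b] * pA_given_B[b][a] - p) < 1e-12
```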

PMF Equivalents and Common Graph Motifs

(Figure: a graph motif over nodes A, B, and C.)

PMF Equivalents and Common Graph Motifs

Cycles are NOT allowed: the graph (e.g., A → B → C → A) must be a directed acyclic graph (DAG).
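The acyclicity requirement is easy to check mechanically. Here is a minimal sketch using Kahn's topological-sort algorithm (the function name and graph encoding are my own, not from the slides):

```python
from collections import deque

def is_dag(nodes, edges):
    """Kahn's algorithm: the graph is a DAG iff every node can be
    removed in an order where each node's parents are removed first."""
    indeg = {n: 0 for n in nodes}
    children = {n: [] for n in nodes}
    for parent, child in edges:
        children[parent].append(child)
        indeg[child] += 1
    queue = deque(n for n in nodes if indeg[n] == 0)
    seen = 0
    while queue:
        n = queue.popleft()
        seen += 1
        for c in children[n]:
            indeg[c] -= 1
            if indeg[c] == 0:
                queue.append(c)
    return seen == len(nodes)  # a leftover node means a cycle

print(is_dag("ABC", [("A", "B"), ("B", "C")]))             # True (chain)
print(is_dag("ABC", [("A", "B"), ("B", "C"), ("C", "A")])) # False (cycle)
```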

Directed Graphs

- Hypothesis: not directly observable, or only observable at high cost
- Data: information that reveals something about the state of a hypothesis

Typical: Hypothesis → Data; Hypothesis → Hypothesis. Not typical: Data → Hypothesis.

Directed Graphs

It helps me to remember the typical direction: Disease → Symptoms.

PMF Equivalents and Common Graph Motifs

In general, for a graph consisting of nodes in the set {A_1, ..., A_n}, the joint PMF is given by the product (chain) rule:

  Pr(A_1, ..., A_n) = ∏_{i=1}^{n} Pr(A_i | pa(A_i))

where pa(A_i) denotes the parent nodes of A_i. This equation defines the Bayesian network over the DAG.
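A minimal sketch of this factorization in code, for a hypothetical two-node network A → B (all names and table values are invented for illustration):

```python
from itertools import product

# Hypothetical network: A -> B, one table per node, keyed by parent states.
states = {"A": ["yes", "no"], "B": ["low", "high"]}
parents = {"A": [], "B": ["A"]}
tables = {
    "A": {(): {"yes": 0.3, "no": 0.7}},
    "B": {("yes",): {"low": 0.2, "high": 0.8},
          ("no",):  {"low": 0.6, "high": 0.4}},
}

def joint_pmf(assignment):
    """Chain rule: Pr(A_1, ..., A_n) = prod_i Pr(A_i | pa(A_i))."""
    p = 1.0
    for node in states:
        key = tuple(assignment[par] for par in parents[node])
        p *= tables[node][key][assignment[node]]
    return p

# Sanity check: a valid joint PMF sums to 1 over all state combinations.
total = sum(joint_pmf(dict(zip(states, combo)))
            for combo in product(*states.values()))
print(round(total, 10))  # 1.0
```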

Example: Monty Hall Problem

In the "Let's Make a Deal" game show, one version of the choose-a-door game goes as follows (Monty himself pointed out that there were many variations, depending on his mood):

You are given the choice of three doors. Behind one door is the real prize, a car; behind the others are goats or other gag prizes. You pick a door, say No. 1. The host (who knows what's behind the doors) opens another door, say No. 3, which has a goat. He then says to you, "Do you want to switch to door No. 2?"

Question: Is it to your advantage to switch your choice?

Example: Monty Hall Problem

Nodes:
- P = door # the prize is behind
- C = your choice of door #
- M = Monty's choice of door #

Joint PMF for the scenario: Pr(P, C, M).

What are the dependencies between the nodes?
- Your choice of door affects Monty's choice of door.
- The door the prize is behind affects Monty's choice of door.
- Your choice of door is not affected by anything in this scenario.
- The door the prize is behind is not affected by anything in this scenario.

So the graph is P → M ← C.

Example: Monty Hall Problem

Task: marginalize nodes. Knowing the probability table of each node, compute all marginal probabilities:
- Marginals of prior nodes are just their tables.
- Marginals of conditional nodes generally require software: graph-theoretic operations and some form of Pearl's message-passing algorithm.
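For a network this small, marginals can also be computed by brute force, summing the joint over the other variables. A sketch under the standard Monty Hall rules (Monty never opens your door or the prize door; the encoding below is my own):

```python
from itertools import product

DOORS = (1, 2, 3)

def p_monty(m, p, c):
    """Pr(M=m | P=p, C=c): Monty opens neither the chosen door nor the
    prize door; if two doors qualify (p == c) he picks one at random."""
    allowed = [d for d in DOORS if d != p and d != c]
    return (1 / len(allowed)) if m in allowed else 0.0

# Prior nodes: prize placement and your choice are uniform and independent.
p_prize = {d: 1 / 3 for d in DOORS}
p_choice = {d: 1 / 3 for d in DOORS}

# Marginal of M: sum the joint Pr(P) * Pr(C) * Pr(M|P,C) over P and C.
p_m = {m: sum(p_prize[p] * p_choice[c] * p_monty(m, p, c)
              for p, c in product(DOORS, DOORS))
       for m in DOORS}
print({m: round(v, 4) for m, v in p_m.items()})  # 1/3 for each door
```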

Example: Monty Hall Problem

Pr(P), prize is behind door #:   door 1: 0.333, door 2: 0.333, door 3: 0.333
Pr(C), your choice of door #:    door 1: 0.333, door 2: 0.333, door 3: 0.333

Pr(M|P,C), Monty Hall chooses door #: Monty never opens your door or the prize door. If your choice matches the prize door (P = C), each of the two remaining doors has probability 0.5; otherwise the single remaining door has probability 1.

Example: Monty Hall Problem

Task: update the marginals of nodes after data are collected. After introducing "evidence" (observations), how do our beliefs about the states of the other nodes change? Here the evidence is: your choice of door = door #1, and Monty Hall chooses door #3.
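Under the standard Monty Hall rules, this update is a direct application of Bayes' rule; a self-contained sketch (encoding my own):

```python
DOORS = (1, 2, 3)

def p_monty(m, p, c):
    # Monty opens neither the chosen door nor the prize door;
    # when both remaining doors qualify (p == c), he picks 50/50.
    allowed = [d for d in DOORS if d != p and d != c]
    return (1 / len(allowed)) if m in allowed else 0.0

# Evidence: you chose door 1 (C=1), Monty opened door 3 (M=3).
# Bayes' rule: Pr(P=p | C=1, M=3) is proportional to Pr(P=p) * Pr(M=3 | p, C=1).
unnorm = {p: (1 / 3) * p_monty(3, p, 1) for p in DOORS}
z = sum(unnorm.values())
posterior = {p: unnorm[p] / z for p in DOORS}
print(posterior)  # door 1: 1/3, door 2: 2/3, door 3: 0 -> switching wins
```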

Example: Monty Hall Problem

Updated beliefs about which door to pick, given that you chose door #1 and Monty opened door #3: the prize is behind door #1 with probability 1/3 and behind door #2 with probability 2/3, so it is to your advantage to switch.

Software: SamIam
- Draw nodes
- Draw dependency arrows
- Compute marginals/updates

Software: SamIam
- Switch on node histograms
- Enter evidence by clicking on observed states
- Switch back to edit mode

Software: GeNIe
- Draw nodes
- Draw dependency arrows
- Compute marginals/updates

Software: GeNIe
- Switch on node histograms
- Toggle "update immediately"
- Enter evidence by clicking on observed states