I. The meaning of chance: Axiomatization. E Pluribus Unum.



Fair coin? How do you check whether a coin is fair? Rephrase: how many times do you need to toss a coin in order to be “confident” that it is fair?
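One conventional frequentist reading of the question, sketched below in Python (the function names and the ±5% margin are illustrative choices, not from the slides): estimate the heads rate from n tosses, and ask how large n must be before that estimate is pinned down to a given margin.

```python
import math
import random

def tosses_needed(margin=0.05, z=1.96):
    """Smallest n such that a 95% confidence interval for the heads
    rate has half-width at most `margin` (worst case p = 1/2)."""
    return math.ceil((z / (2 * margin)) ** 2)

def observed_heads_rate(n, p_true=0.5, seed=0):
    """Toss a coin with true heads probability p_true n times and
    return the relative frequency of heads."""
    rng = random.Random(seed)
    heads = sum(rng.random() < p_true for _ in range(n))
    return heads / n

n = tosses_needed(margin=0.05)      # about 385 tosses for a +/- 5% margin
print(n, observed_heads_rate(n))
```

With this reading, pinning the heads rate down to ±5% already takes a few hundred tosses; halving the margin roughly quadruples the number of tosses needed.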

Classical definition of probability. Pierre-Simon Laplace, “Théorie analytique des probabilités”: “The theory of chance consists in reducing all the events of the same kind to a certain number of cases equally possible, that is to say, to such as we may be equally undecided about in regard to their existence, and in determining the number of cases favorable to the event whose probability is sought. The ratio of this number to that of all the cases possible is the measure of this probability, which is thus simply a fraction whose numerator is the number of favorable cases and whose denominator is the number of all the cases possible.”
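In symbols, Laplace's definition is the familiar ratio below; the die example is added here only as an illustration.

```latex
P(A) \;=\; \frac{\#\{\text{cases favorable to } A\}}{\#\{\text{equally possible cases}\}},
\qquad\text{e.g.}\quad
P(\text{even face of a fair die}) \;=\; \frac{3}{6} \;=\; \frac{1}{2}.
```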

Frequentist view: defines an event's probability as the limit of its relative frequency in a large number of trials. The development of the frequentist account was motivated by the problems and paradoxes of the previously dominant viewpoint, the classical interpretation. Frequentists: Venn, Fisher, von Mises.
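Written out, the definition in the slide above is the limiting relative frequency

```latex
P(A) \;=\; \lim_{n \to \infty} \frac{n_A}{n},
```

where n_A is the number of times the event A occurs in n repetitions of the experiment.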

Frequentists talk about probabilities only when dealing with well-defined random experiments. The set of all possible outcomes of a random experiment is called the sample space of the experiment. An event is defined as a particular subset of the sample space that you want to consider. On each repetition of the experiment an event either occurs or does not occur. The relative frequency of occurrence of an event, in a number of repetitions of the experiment, is a measure of the probability of that event.

Monty Hall problem (“Monty's dilemma”): “Suppose you’re on a game show, and you’re given a choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say number 1, and the host, who knows what’s behind the doors, opens another door, say number 3, which has a goat. He says to you, ‘Do you want to pick door number 2?’ Is it to your advantage to switch your choice of doors?”
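The standard answer is that switching wins with probability 2/3. A minimal Monte Carlo sketch in Python (the helper names are mine, not from the slides) makes this easy to check:

```python
import random

def monty_hall(switch, trials=100_000, seed=0):
    """Estimate the win probability of the stay/switch strategies
    by simulating the game many times."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)             # door hiding the car
        pick = rng.randrange(3)            # contestant's first choice
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print("stay  :", monty_hall(switch=False))   # ~ 1/3
print("switch:", monty_hall(switch=True))    # ~ 2/3
```

When the first pick already hides the car, the host's choice between the two goat doors is made deterministically here; that does not change the contestant's win probability, since switching loses in that case no matter which goat door is opened.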

Bayesian probability. The Bayesian approach treats “probability” as 'a measure of a state of knowledge', not as a frequency. Objectivist school: the rules of Bayesian statistics are justified by desiderata of rationality and consistency; an extension of Aristotelian logic. Subjectivist school: the state of knowledge corresponds to a 'personal belief' [3]. “Machine learning” methods are based on objectivist Bayesian principles. A probability can be assigned to a hypothesis, which is not possible under the frequentist view, where a hypothesis can only be accepted or rejected. Aristotelian logic: every statement is either true or false. Bayesian reasoning: incorporates uncertainty.

Bayes’ rule
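The slide title refers to the rule itself, which in the notation of the slide that follows reads:

```latex
P(A \mid B) \;=\; \frac{P(B \mid A)\, P(A)}{P(B)}.
```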

Cox’s axioms. Cox wanted his system to satisfy the following conditions: 1. Divisibility and comparability: the plausibility of a statement is a real number and depends on the information we have related to the statement. 2. Common sense: plausibilities should vary sensibly with the assessment of plausibilities in the model. 3. Consistency: if the plausibility of a statement can be derived in more than one way, all the results must be equal.

P(A) is the prior probability or marginal probability of A. It is "prior" in the sense that it does not take into account any information about B. P(A|B) is the conditional probability of A given B. It is also called the posterior probability: it is derived from, or depends upon, the specified value of B. P(B|A) is the conditional probability of B given A. P(B) is the prior or marginal probability of B, and acts as a normalizing constant.
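As a worked illustration (this example is mine, not from the slides): let A = “the coin is fair”, with the only alternative being a two-headed coin, take priors P(A) = P(not A) = 1/2, and let B = “three tosses all come up heads”. Then

```latex
P(A \mid B)
\;=\; \frac{P(B \mid A)\,P(A)}{P(B \mid A)\,P(A) + P(B \mid \neg A)\,P(\neg A)}
\;=\; \frac{(1/2)^3 \cdot \tfrac12}{(1/2)^3 \cdot \tfrac12 + 1 \cdot \tfrac12}
\;=\; \frac{1/16}{9/16} \;=\; \frac{1}{9},
```

so three heads in a row already shift the posterior strongly toward the two-headed hypothesis.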

Bayesian probability interprets the concept of probability as 'a measure of a state of knowledge'. The frequentist view of probability overshadowed the Bayesian view during the first half of the 20th century due to prominent figures such as Ronald Fisher, Jerzy Neyman and Egon Pearson. The word “Bayesian” appeared in the 1950s, and by the 1960s it became the term preferred by people who sought to escape the limitations and inconsistencies of the frequentist approach to probability theory.

Bertrand's paradox
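The slide title refers to Bertrand's “random chord” paradox: the probability that a random chord of a circle is longer than the side of the inscribed equilateral triangle comes out as 1/3, 1/2 or 1/4 depending on how “random chord” is defined, which is exactly the kind of ambiguity the classical definition of probability runs into. A small simulation sketch in Python (the function names are mine, not from the slides):

```python
import math
import random

R = 1.0
SIDE = math.sqrt(3) * R          # side of the inscribed equilateral triangle

def chord_random_endpoints(rng):
    """Pick two uniform points on the circle and join them."""
    a, b = rng.uniform(0, 2 * math.pi), rng.uniform(0, 2 * math.pi)
    return 2 * R * abs(math.sin((a - b) / 2))

def chord_random_radial_point(rng):
    """Pick a uniform distance from the centre along a fixed radius and
    take the chord perpendicular to that radius at that point."""
    d = rng.uniform(0, R)
    return 2 * math.sqrt(R**2 - d**2)

def chord_random_midpoint(rng):
    """Pick a uniform point in the disk and use it as the chord's midpoint."""
    while True:
        x, y = rng.uniform(-R, R), rng.uniform(-R, R)
        if x * x + y * y <= R * R:
            return 2 * math.sqrt(R**2 - (x * x + y * y))

def estimate(method, trials=200_000, seed=0):
    """Fraction of chords longer than the triangle side under `method`."""
    rng = random.Random(seed)
    return sum(method(rng) > SIDE for _ in range(trials)) / trials

for m in (chord_random_endpoints, chord_random_radial_point, chord_random_midpoint):
    print(m.__name__, estimate(m))   # ~0.333, ~0.5, ~0.25
```

Three equally natural ways of choosing a chord “at random” give three different answers, because each corresponds to a different sample space of “equally possible” cases.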

Questions: What is the main difference between the classical and the frequentist definition of probability? What is the main difference between frequentist probability and Bayesian probability?