

Basics of Probability

A Bit of Math

A probability space is a triple (Ω, F, P), where
- Ω is the sample space: a non-empty set of possible outcomes;
- F is an algebra (a.k.a. field) on Ω, that is to say, F is a set of subsets of Ω that contains Ω as a member and is closed under union and complementation;
- P is a function from F to the real numbers such that
  (1) P(S) ≥ 0 for all S ∈ F;
  (2) P(Ω) = 1;
  (3) for all A, B ∈ F, if A ∩ B = ∅, then P(A ∪ B) = P(A) + P(B).
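A minimal sketch of these axioms in Python (our own toy construction, not from the slides): take a four-element sample space, let F be its full power set, and let P be the uniform measure. On a space this small, all three axioms can be checked exhaustively.

```python
from itertools import chain, combinations
from fractions import Fraction

omega = frozenset({1, 2, 3, 4})

# F = the power set of omega, the largest algebra on omega.
def power_set(s):
    items = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

F = power_set(omega)

# P: the uniform measure, P(S) = |S| / |omega|.
def P(S):
    return Fraction(len(S), len(omega))

# Axiom (1): non-negativity.
assert all(P(S) >= 0 for S in F)
# Axiom (2): the whole sample space has probability 1.
assert P(omega) == 1
# Axiom (3): finite additivity for disjoint events.
for A in F:
    for B in F:
        if A & B == frozenset():
            assert P(A | B) == P(A) + P(B)

print("all three axioms hold for the uniform measure")
```

Any function assigning non-negative weights summing to 1 over the outcomes would pass the same checks; uniformity is just the simplest choice.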

Example

Sample space: Ω = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13}. Algebra: F = P(Ω), the set of all subsets of Ω.
Technically, members of F are usually referred to as events, but they can also express properties, propositions, etc. E.g.:
- event: "a black object being selected" – {1, 2, 3, 4, 5, 6, 7, 8, 9}
- property: "showing a letter A" – {1, 2, 7, 10, 12}
- proposition: "the object selected is both round and black" – {7, 8, 9}
[Figure: 13 numbered objects, each labeled A or B and varying in shape and color.]

Conditional Probability

Define P(A | B) = P(A ∩ B) / P(B), for P(B) > 0.
Bayes' theorem: P(B | A) = [P(A | B) P(B)] / P(A).
Law of total probability: let B_1, …, B_m ∈ F be a partition of Ω, that is to say, B_i ∩ B_j = ∅ for all i ≠ j and B_1 ∪ … ∪ B_m = Ω. Then P(A) = Σ_i P(A ∩ B_i) = Σ_i P(A | B_i) P(B_i).
Independence: two events A, B are said to be independent if P(A | B) = P(A) (which implies that P(B | A) = P(B)).
Conditional independence: A and B are independent conditional on C if P(A | B, C) = P(A | C).
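These rules can be illustrated numerically. The sketch below uses a hypothetical two-urn setup of our own (not from the slides): B1 and B2 partition the sample space according to which urn is chosen, and A is the event of drawing a red ball.

```python
from fractions import Fraction

# Two urns, chosen by a fair coin flip; urn 1 is 3/4 red, urn 2 is 1/4 red.
p_B = {"B1": Fraction(1, 2), "B2": Fraction(1, 2)}          # partition of Omega
p_A_given_B = {"B1": Fraction(3, 4), "B2": Fraction(1, 4)}  # urn compositions

# Law of total probability: P(A) = sum_i P(A | B_i) P(B_i)
p_A = sum(p_A_given_B[b] * p_B[b] for b in p_B)

# Bayes' theorem: P(B1 | A) = P(A | B1) P(B1) / P(A)
p_B1_given_A = p_A_given_B["B1"] * p_B["B1"] / p_A

print(p_A)           # 1/2
print(p_B1_given_A)  # 3/4
```

Observing a red ball raises the probability of urn 1 from 1/2 to 3/4, which is exactly what Bayes' theorem quantifies.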

Example

Suppose one object is chosen at random from the 13 objects. Consider the following events: A (having a letter A), B (having a letter B), s (being a square), w (being white).
P(A) = 5/13, P(A ∩ s) = 3/13, P(s | A) = P(A ∩ s) / P(A) = 3/5.
P(s) = P(A) P(s | A) + P(B) P(s | B) = 8/13.
P(A | s) = [P(A) P(s | A)] / P(s) = 3/8. So A and s are not independent.
P(A | w) = P(A | s, w) = 1/2. So A and s are independent conditional on w.
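The slide's numbers can be checked by counting. In the sketch below, the letter labels and colors follow the earlier example; the square/round assignment is a hypothetical reconstruction of our own, chosen to be consistent with the stated probabilities, since the original figure is not preserved.

```python
from fractions import Fraction

objects = set(range(1, 14))
A = {1, 2, 7, 10, 12}                 # labeled "A" (from the slide)
black = set(range(1, 10))             # objects 1-9 are black (from the slide)
white = objects - black
squares = {1, 2, 3, 4, 5, 6, 10, 11}  # ASSUMED shape assignment (figure lost)

def pr(event, given=None):
    """P(event | given) by counting, uniform over the 13 objects."""
    base = objects if given is None else given
    return Fraction(len(event & base), len(base))

assert pr(A) == Fraction(5, 13)
assert pr(A & squares) == Fraction(3, 13)
assert pr(squares, given=A) == Fraction(3, 5)
assert pr(squares) == Fraction(8, 13)
assert pr(A, given=squares) == Fraction(3, 8)
# A and s are independent conditional on w:
assert pr(A, given=white) == pr(A, given=squares & white) == Fraction(1, 2)
print("all the slide's numbers check out")
```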

Different Concepts of Probability

- Probability as relative frequency
- Probability as propensity
- Probability as degrees of logical entailment
- Probability as degrees of belief

Probability as Relative Frequency

Probability is some sort of relative frequency:
- relative frequency in a finite population?
- limiting relative frequency in an infinite sequence?
- relative frequency in a "suitably" large population?
Problem of the single case: how to make sense of a probability statement about a singular case? Tempting answer: the relative frequency of passing in a reference class of similar cases.
Problem of the reference class: one case can belong to multiple reference classes. One answer: use the narrowest class with enough data.
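The "limiting relative frequency" idea can be seen in a quick simulation (a sketch of ours, not from the slides): track the fraction of heads over ever-longer initial segments of a simulated sequence of fair-coin tosses and watch it settle near 1/2.

```python
import random

# Relative frequency of heads in longer and longer initial segments
# of one simulated sequence of fair-coin tosses.
random.seed(0)  # fixed seed for reproducibility
heads = 0
for n in range(1, 100_001):
    heads += random.random() < 0.5
    if n in (10, 100, 1_000, 10_000, 100_000):
        print(f"n = {n:>7}: relative frequency = {heads / n:.4f}")
```

Of course, the frequentist's "limit" concerns the whole infinite sequence; any finite simulation only illustrates the tendency, which is part of why the single-case problem bites.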

Probability as Propensity

Probability is a propensity or tendency of a chance set-up to produce a certain outcome.
It seems natural to talk about single-case probabilities (usually known as chances), but only relative to some chance set-up.
The chance set-up needs to be stable and genuinely chancy to admit non-degenerate probability values.
Relative frequencies are still relevant for measuring propensity.

Logical Probability

Probability is a degree of partial entailment or confirmation between propositions.
Motivated by the aspiration to generalize deductive logic into an inductive logic.
Given such a logic, we can talk about valid inductive arguments: arguments in which the premises confer the right logical (or inductive) probability on the conclusion.
There are then many logical probabilities associated with a proposition, relative to different "evidence" propositions.
Principle of Total Evidence: the logical probability relative to all your evidence is your "right" credence.

Subjective Probability

Probability is the degree of belief of some agent.
Assumed to have a nice correspondence with one's betting behavior.
Dutch-book argument: your fair betting odds satisfy the probability calculus if and only if nobody can "book" you by designing a combination of bets, each acceptable to you, that entails a sure loss on your part.
There are various proposals for putting more constraints on subjective probability, based on relative frequency, chance, or logical probability.
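The arithmetic behind a Dutch book can be shown with toy numbers of our own (not from the slides): an agent whose credences in A and in not-A sum to more than 1 accepts a pair of bets that loses money however A turns out.

```python
from fractions import Fraction

# Incoherent credences: 3/5 in A and 3/5 in not-A (they sum to 6/5 > 1).
# At fair odds, the agent regards p * stake as a fair price for a bet
# paying `stake` if the event occurs, so a bookie sells them both bets.
p_A, p_not_A = Fraction(3, 5), Fraction(3, 5)
stake = 1

cost = (p_A + p_not_A) * stake      # the agent pays 6/5 in total
for a_occurs in (True, False):
    payoff = stake                  # exactly one of A, not-A pays off
    net = payoff - cost             # 1 - 6/5 = -1/5 either way
    assert net < 0                  # a sure loss: the agent is "booked"
print("sure loss:", stake - cost)
```

Had the credences summed to exactly 1, the two bets would cancel and no sure-loss package could be constructed, which is the "only if" half of the argument.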

Three Approaches to Statistical Explanation

Hempel's I-S Model

Basic idea: a statistical explanation of a particular event is a correct inductive argument that contains a statistical law essentially among its premises and confers a high (logical or inductive) probability on the explanandum.
Generic form:
  p(G; F) = r    (statistical law)
  Fa             (particular facts about a)
  ============== [r]
  Ga             (the explanandum)
Note that the conclusion of the (inductive) argument is not a probabilistic statement, but the very explanandum, the sentence describing the particular event to be explained. The "[r]" next to the inference line marks the inductive probability the premises confer on the conclusion.

The Problem of Ambiguity

One problem is that there can be two correct inductive arguments with high inductive probability but contradictory conclusions.
The root of the problem is that it is possible to have two true statistical laws, p(G; F) = r_1 and p(G; F&E) = r_2, such that r_1 is very high but r_2 is very low.
You may read the two laws this way: "Relative to class F (or among objects with property F), the probability of G is high (r_1)", and "Relative to class F&E (or among objects with both F and E), the probability of G is low (r_2)".
It looks like a problem of reference class.
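A toy count (our own numbers, not from the slides) makes the ambiguity concrete: both statistical laws can be true at once, and each supports a high-probability inductive argument for a contradictory conclusion about an object known to be both F and E.

```python
from fractions import Fraction

# 100 objects have F; 90 of them have G, so p(G; F) = 9/10.
# 10 of the F-objects also have E; only 1 of those has G,
# so p(G; F&E) = 1/10. Both laws are true together, because the
# 9 non-G objects in F&E are among the 10 non-G objects in F.
p_G_given_F = Fraction(90, 100)    # argues strongly FOR Ga
p_G_given_FE = Fraction(1, 10)     # argues strongly AGAINST Ga

assert p_G_given_F == Fraction(9, 10)
assert 1 - p_G_given_FE == Fraction(9, 10)
# For an object a that is both F and E, two correct inductive arguments
# confer probability 9/10 on Ga and 9/10 on not-Ga, respectively.
print("both contradictory conclusions get probability 9/10")
```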

Maximal Specificity

It is then natural to suggest the following solution: choose the most specific, or narrowest, "reference class". This is essentially the requirement of maximal specificity imposed by Hempel.
Note that his "maximal specificity" is relative to a knowledge situation. That is, the requirement is that the statistical law used in an I-S explanation must be the most specific statistical law we know that governs the case in question. (There might be an even more specific but unknown statistical law that governs the case.)
So I-S explanations are always relative to a knowledge or epistemic situation.

Railton's D-N-P Model

Basic idea: the argument central to a statistical explanation is still deductive, not inductive. The relevant statistical laws are "universal laws about chances".
Generic form:
  ∀x (Fx → p(G(x)) = r)   (statistical law)
  Fa                      (particular facts about a)
  -----------------------
  p(Ga) = r               (chance of the explanandum)
Note that the conclusion of the argument is not the explanandum itself, but a sentence describing the chance of the particular event in question.

Salmon's S-R Approach

Basic idea: a statistical explanation is not an argument at all, but is rather (based upon) an assembly of all (and only) the facts that are statistically relevant to the explanandum, together with how the probability of the explanandum depends on the relevant factors.
Salmon gave 8 steps/conditions for constructing the S-R basis:
- Condition 5 essentially requires that all statistically relevant factors are considered in the partition. (An analogue of Hempel's maximal specificity, but without being relativized to a knowledge situation.)
- Condition 7 essentially requires that only statistically relevant factors are used in the partition.
- Also note that all relevant probabilities are calculated in step 4.

A Few More Observations

Neither Railton nor Salmon requires a high probability of any sort.
Neither Railton nor Salmon relativizes the concept of statistical explanation to a knowledge situation.
Salmon, but not Hempel or Railton, seems to suggest that it is not only important to ascertain the probability of the explanandum (the relative frequency with respect to the right reference class, or the chance given the right chance set-up), but also crucial to reveal how that probability depends on various factors.