CS1512 Foundations of Computing Science 2 Week 3 (CSD week 32) Probability © J R W Hunter, 2006, K van Deemter 2007.



The story so far ...

Some basic concepts of statistics: tools for summarising data
  kinds of data/variables
  difference between sample and sample space
  various kinds of graphs
  the sample mean, median and mode
  skew
  measures of spread: variance and standard deviation
  upper and lower quartiles

Now a related but more theoretical topic: probability

Initial thoughts

A person tosses a coin five times. Each time it comes down heads. What is the probability that it will come down heads on the sixth toss?

½ or ‘1 chance in 2’ or 50%
  assumed that the coin is ‘fair’
  ignored the empirical evidence of the first five tosses
less than ½ or ...
  assumed that the coin is ‘fair’
  thought about the ‘law of averages’ – in the long run, half heads, half tails
  but coins have no memory!
more than ½ or ...
  cynical – assumed a biased coin

Probability can be defined in different ways, some of which are more suitable in particular situations. First, let’s look at a relatively simple situation, based on complete information.

One definition of probability

Probability: the extent to which an event is likely to occur, measured by the ratio of the number of favourable outcomes to the total number of possible outcomes.

10 balls in a bag: 7 white, 3 red
close eyes, shake bag, put in hand and pick a ball
10 possible balls (outcomes)
Might there be a reason why any one ball is picked rather than any other? If not, then all outcomes are equally likely.

Probability of picking a red ball

favourable outcome = red ball
number of favourable outcomes = 3
number of possible outcomes = 10
probability = 3/10 (i.e. 0.3)

Note:
  the probability of picking a white ball is 7/10 (0.7)
  probabilities lie between 0.0 and 1.0
  we have two mutually exclusive outcomes (a ball can’t be both red and white)
  the outcomes are exhaustive (no other colour is present)
  in this case the probabilities add to 1.0 (0.3 + 0.7)
  a probability is a prediction
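The arithmetic on this slide can be checked with a quick Python sketch (Python is used here purely for illustration; it is not part of the course notes). Exact fractions avoid floating-point rounding:

```python
from fractions import Fraction

# A bag holds 10 balls: 7 white and 3 red.  With equally likely outcomes,
# each probability is (favourable outcomes) / (total outcomes).
total = 10
p_red = Fraction(3, total)
p_white = Fraction(7, total)

print(p_red)            # 3/10
print(p_white)          # 7/10

# Red and white are mutually exclusive and exhaustive,
# so the two probabilities sum to 1.
print(p_red + p_white)  # 1
```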

Useful concepts

Trial: an action which results in one of several possible outcomes, e.g. picking a ball from the bag
Experiment: a series of trials (or perhaps just one), e.g. picking a ball from the bag twenty times
Event: a set of outcomes with something in common, e.g. a red ball

Probability derived a priori

Suppose each trial in an experiment can result in one (and only one) of n equally likely (as judged by thinking about it) outcomes, r of which correspond to an event E. The probability of event E is:

P(E) = r / n

a priori means “without investigation or sensory experience”

More complex a priori probabilities

What’s the probability of throwing heads with a fair coin? ½

What’s the probability of throwing exactly one head with two fair coins? ½

Proof: set of outcomes = {hh, tt, ht, th}, so n = 4
favourable outcomes = {ht, th}, so r = 2
therefore, P(E) = 2/4 = 1/2

More complex a priori probabilities

What’s the probability of throwing at least one head with two fair coins? 3/4

Proof: set of outcomes = {hh, tt, ht, th}, so n = 4
favourable outcomes = {ht, th, hh}, so r = 3
therefore, P(E) = 3/4
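The proofs above work by listing the equally likely outcomes and counting the favourable ones. A short Python sketch (an illustration, not part of the original slides) does the same enumeration:

```python
from itertools import product

# All equally likely outcomes of tossing two fair coins.
outcomes = list(product('ht', repeat=2))   # ('h','h'), ('h','t'), ('t','h'), ('t','t')
n = len(outcomes)                          # 4

exactly_one = sum(1 for o in outcomes if o.count('h') == 1)
at_least_one = sum(1 for o in outcomes if o.count('h') >= 1)

print(exactly_one, '/', n)   # 2 / 4, i.e. 1/2
print(at_least_one, '/', n)  # 3 / 4
```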

More complex a priori probabilities

What’s the probability of throwing 6 with one fair dice? 1/6

What’s the probability of throwing 8 with two fair dice?

More complex a priori probabilities

Probability of throwing 8 with two dice. The table shows the sum for each pair of faces:

      1   2   3   4   5   6
  1   2   3   4   5   6   7
  2   3   4   5   6   7   8
  3   4   5   6   7   8   9
  4   5   6   7   8   9  10
  5   6   7   8   9  10  11
  6   7   8   9  10  11  12

5 outcomes correspond to the event “throwing 8 with two dice”
36 possible outcomes
probability = 5/36
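The same count can be made by brute-force enumeration; the Python sketch below (illustrative only) mirrors the addition table:

```python
from itertools import product

# All 36 equally likely outcomes of throwing two fair dice.
outcomes = list(product(range(1, 7), repeat=2))
favourable = [o for o in outcomes if sum(o) == 8]

print(len(favourable), '/', len(outcomes))  # 5 / 36
```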

Probability derived from experiment

Toss a drawing pin in the air – two possible outcomes: point up, point down.
Can we say a priori what the relative probabilities are? (c.f. the ‘fair’ coin)
If not, then experimentation is a possible source for determining probabilities. (This gets us back to statistics!)

Probability based on relative frequencies (from experimental data):
If, in a large number of trials (n), r of these result in an event E, the probability of event E is:

P(E) = r / n

Probability derived from experiment

the number of trials must be ‘large’ (how large?)
trials must be ‘independent’ – the outcome of any one toss must not depend on any previous toss
there is no guarantee that the value of r/n will settle down
compare with the relative frequencies for existing data
if we have enough experiments then we believe that the relative frequency contains a general ‘truth’ about the system we are observing
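A simulation makes the settling-down behaviour concrete. The true point-up probability of a real drawing pin is unknown; the value p = 0.6 below is an arbitrary assumption used only to generate data, and Python itself is an illustrative choice:

```python
import random

# Simulate tossing a drawing pin.  p = 0.6 is an ASSUMED point-up
# probability, used only to drive the simulation.
p = 0.6
random.seed(1)  # fixed seed so the run is reproducible

for n in (10, 100, 10_000, 100_000):
    r = sum(1 for _ in range(n) if random.random() < p)
    print(n, r / n)

# The relative frequency r/n wanders for small n and settles
# near p as n grows (the law of large numbers).
```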

a priori and experimental probabilities: ranges and odds

In both cases the minimum probability is 0:
  a priori – the event cannot occur, e.g. a blue ball is drawn; throwing 7 on an ordinary dice
  experimental – the event has never occurred
In both cases the maximum probability is 1:
  a priori – the event must occur, e.g. the sun will rise tomorrow (knowledge of physics)
  experimental – the event has always occurred, e.g. the sun has risen every day in my experience
For two events, express as odds: odds on a red ball are 3/10 to 7/10, i.e. 3 to 7.

Question

“I think there’s a better than evens chance that English Jim will win the 2:50 at Exeter today.”
“How likely is it to snow tomorrow?”

What kind of probability are we talking about here? a priori? experimental?
Neither:
  it is a future event, so there is no experimental evidence (and it only happens once)
  reasoning from first principles – but what principles?

Probability of two independent events

Experiment:
  trial 1: pick a ball – event E1 = the ball is red
  replace the ball in the bag
  trial 2: pick a ball – event E2 = the ball is white

What is the probability of the event E = first picking a red ball, then a white ball (with replacement)?

P(E) = P(E1 and E2), also written P(E1 ∩ E2)

If the two trials are independent (the outcome of the first trial does not affect the outcome of the second) then:

P(E1 and E2) = P(E1) * P(E2)   – the multiplication law
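The multiplication law can be checked against a full enumeration of the with-replacement experiment. A Python sketch (illustrative only):

```python
from fractions import Fraction
from itertools import product

# 10 balls: 3 red and 7 white.  Drawing twice WITH replacement gives
# 10 * 10 = 100 equally likely ordered pairs.
balls = ['R'] * 3 + ['W'] * 7
pairs = list(product(balls, repeat=2))

fav = sum(1 for a, b in pairs if a == 'R' and b == 'W')
print(Fraction(fav, len(pairs)))            # 21/100

# The multiplication law gives the same answer directly:
print(Fraction(3, 10) * Fraction(7, 10))    # 21/100
```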

Probability of two red balls

              second draw
              R  R  R  W  W  W  W  W  W  W
  first   R   Y  Y  Y  N  N  N  N  N  N  N
  draw    W   N  N  N  N  N  N  N  N  N  N

(each of the 3 red and 7 white first-draw balls gives a row like the R and W rows shown)
9 favourable outcomes out of 100 possibilities, all equally likely
probability of two reds = 9/100 ( = 3/10 * 3/10 )

Probability of two non-independent events

Experiment:
  trial 1: pick a ball – event E1 = the ball is red
  do not replace the ball in the bag
  trial 2: pick a ball – event E2 = the ball is red

What is the probability of the event E = picking two red balls on successive trials (without replacement)?
In this case, the probability of E2 is affected by the outcome of trial 1. See the following table, which visualises the situation where trial 1 results in E1 being true.

Probability of two red balls

second draw – assuming a red was chosen first time

              R  R  W  W  W  W  W  W  W
  first   R   Y  Y  N  N  N  N  N  N  N
  draw    W   N  N  N  N  N  N  N  N  N

6 favourable outcomes out of 90 possibilities, all equally likely
probability of two reds = 6/90

Conditional probabilities P( E2 | E1 ) = the probability that E2 will happen given that E1 has already happened. If E2 and E1 are (conditionally, statistically) independent then P( E2 | E1 ) = P( E2 ) Be careful about assuming independence.

Probability of two non-independent events

Instead of drawing a table, you can also use a more general formula:

P(E1 and E2) = P(E1) * P(E2 | E1)

NB: this is still the multiplication law, in its general form. The multiplication is shown on the following slide:

Probability of two red balls

second draw – assuming a red was chosen first time

              R  R  W  W  W  W  W  W  W
  first   R   Y  Y  N  N  N  N  N  N  N
  draw    W   N  N  N  N  N  N  N  N  N

6 favourable outcomes out of 90 possibilities, all equally likely
probability of two reds = 6/90 ( = 3/10 * 2/9 )
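Enumerating the without-replacement experiment confirms that the general multiplication law gives the same answer as the table. A Python sketch (illustrative only):

```python
from fractions import Fraction
from itertools import permutations

# 10 balls: 3 red and 7 white.  Drawing twice WITHOUT replacement
# gives 10 * 9 = 90 equally likely ordered pairs of distinct balls.
balls = ['R'] * 3 + ['W'] * 7
pairs = list(permutations(range(10), 2))   # pairs of distinct indices

fav = sum(1 for i, j in pairs if balls[i] == 'R' and balls[j] == 'R')
print(fav, '/', len(pairs))                # 6 / 90

# P(E1) * P(E2 | E1) = 3/10 * 2/9 gives the same value:
print(Fraction(fav, len(pairs)) == Fraction(3, 10) * Fraction(2, 9))  # True
```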

Probability of one event or another

Experiment:
  bag contains 3 red balls, 7 white balls and 5 blue balls
  trial: pick a ball
  event E1 = the ball is red
  event E2 = the ball is blue

What is the probability of the event E = picking either a red ball or a blue ball?
In this case, it’s just P(E1) + P(E2) = 3/15 + 5/15 = 8/15.

What’s the probability of throwing either 8 or 9 with two dice?

Additive probabilities

Probability of throwing 8 (E1) or 9 (E2) with two dice:

      1   2   3   4   5   6
  1   2   3   4   5   6   7
  2   3   4   5   6   7   8
  3   4   5   6   7   8   9
  4   5   6   7   8   9  10
  5   6   7   8   9  10  11
  6   7   8   9  10  11  12

These two events are mutually exclusive, so you can simply add up P(E1) + P(E2).
9 outcomes correspond to the event E = E1 or E2
36 possible outcomes
probability = 9/36 ( = ¼ ) = 5/36 + 4/36 = P(8) + P(9)
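The addition for mutually exclusive events checks out by enumeration; a Python sketch (illustrative only):

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of throwing two fair dice.
outcomes = list(product(range(1, 7), repeat=2))

def p(total):
    """A priori probability that the two dice sum to `total`."""
    return Fraction(sum(1 for o in outcomes if sum(o) == total), 36)

# Throwing 8 and throwing 9 cannot both happen in one trial,
# so their probabilities simply add.
print(p(8) + p(9))   # 1/4  ( = 5/36 + 4/36 = 9/36 )
```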

Probability of two mutually exclusive events

If the two events are mutually exclusive (they can’t both happen in the same trial) then:

P(E1 or E2) = P(E1) + P(E2)

If this cannot be assumed, you need a slightly more complex formula:

P(E1 or E2) = P(E1) + P(E2) – P(E1 and E2)

Of course, if P(E1 and E2) = 0 then the two formulas are the same.

Probability of two non-mutually exclusive events

E = throwing at least one 5 with two dice
E1 = 5 on the first dice; E2 = 5 on the second dice

      1    2    3    4    5    6
  1  1,1  1,2  1,3  1,4  1,5  1,6
  2  2,1  2,2  2,3  2,4  2,5  2,6
  3  3,1  3,2  3,3  3,4  3,5  3,6
  4  4,1  4,2  4,3  4,4  4,5  4,6
  5  5,1  5,2  5,3  5,4  5,5  5,6
  6  6,1  6,2  6,3  6,4  6,5  6,6

Note that this is not just P(E1) + P(E2) = 1/3, because that would make the mistake of counting (5,5) twice.
11 outcomes correspond to the event E = E1 or E2 or [implicitly] (E1 and E2)
36 possible outcomes
probability = 11/36 = 6/36 + 6/36 – 1/36
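Treating each event as a set of outcomes makes the inclusion-exclusion correction explicit: the set union counts (5,5) only once. A Python sketch (illustrative only):

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of throwing two fair dice.
outcomes = list(product(range(1, 7), repeat=2))

e1 = {o for o in outcomes if o[0] == 5}   # 5 on the first dice
e2 = {o for o in outcomes if o[1] == 5}   # 5 on the second dice

# The set union counts the shared outcome (5,5) once, not twice.
print(Fraction(len(e1 | e2), 36))                            # 11/36
print(Fraction(6, 36) + Fraction(6, 36) - Fraction(1, 36))   # 11/36
```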

Counting/Combinatorics

Drawing tables is a good way of visualising a set of outcomes. If the number of outcomes is very large, this won’t work very well. Luckily, there is an area of mathematics that focusses on questions of counting. For example, “How many ways are there to pick n elements from a set of m elements (where n < m)?” This is sometimes called Combinatorics (also “Permutations and Combinations”). We’re not going into this here.
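Although the slides don't pursue combinatorics, a one-line taste is easy to give. A Python sketch (illustrative only) counts the ways to pick 3 elements from 10, both by formula and by brute force:

```python
from math import comb
from itertools import combinations

# Ways to pick n elements from a set of m, ignoring order.
m, n = 10, 3
print(comb(m, n))                             # 120, via the binomial coefficient

# The same count by brute-force enumeration:
print(len(list(combinations(range(m), n))))   # 120
```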

Probability so far: summing up

Probability as a fraction r/n (i.e., favourable/total)
Visualising the total and favourable outcomes in a table
Relation between an event and its components, e.g.:
  E = E1 and E2
  E = E1 or E2
Questions to consider:
  Are E1 and E2 independent of each other?
  Are E1 and E2 mutually exclusive?
Conditional probability: the probability of E1 | E2