COMPSCI 102 Introduction to Discrete Mathematics.


Probability Refresher What's a Random Variable? A Random Variable is a real-valued function on a sample space S. Linearity of expectation: E[X + Y] = E[X] + E[Y]

Probability Refresher What does this mean: E[X | A]? Is this true: E[X] = E[X | A] Pr[A] + E[X | ¬A] Pr[¬A]? Yes! Similarly: Pr[A] = Pr[A | B] Pr[B] + Pr[A | ¬B] Pr[¬B]

Random Walks Lecture 12 (October 10, 2007)

How to walk home drunk

Abstraction of Student Life [state diagram: states "No new ideas", "Solve HW problem", "Eat", "Hungry"; probabilistic transitions labeled "Work" and "Wait"]

Like finite automata, but instead of a deterministic or non-deterministic action, we have a probabilistic action. Example question: "What is the probability of reaching the goal on the string Work, Eat, Work?"

Simpler: Random Walks on Graphs At any node, go to one of the neighbors of the node with equal probability

Random Walk on a Line [number line: 0 … k … n] You go into a casino with $k, and at each time step you bet $1 on a fair game. You leave when you are broke or have $n. Question 1: What is your expected amount of money at time t? Let X_t be a random variable for the amount of money at time t

Random Walk on a Line You go into a casino with $k, and at each time step you bet $1 on a fair game. You leave when you are broke or have $n. X_t = k + δ_1 + δ_2 + … + δ_t, where δ_i is the random variable for the change in your money at time i. Each E[δ_i] = 0, so E[X_t] = k

Random Walk on a Line You go into a casino with $k, and at each time step you bet $1 on a fair game. You leave when you are broke or have $n. Question 2: What is the probability that you leave with $n?

Random Walk on a Line Question 2: What is the probability that you leave with $n? E[X_t] = k. Also, E[X_t] = E[X_t | X_t = 0] × Pr(X_t = 0) + E[X_t | X_t = n] × Pr(X_t = n) + E[X_t | neither] × Pr(neither). So k = n × Pr(X_t = n) + (something_t) × Pr(neither), where something_t < n. As t → ∞, Pr(neither) → 0. Hence Pr(X_t = n) → k/n
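
A quick way to sanity-check the k/n answer is simulation. Here is a minimal Python sketch (not part of the original lecture; the function name gamblers_ruin and the chosen parameters are illustrative):

import random

def gamblers_ruin(k, n):
    # Start with $k; bet $1 on a fair coin each step; stop at $0 or $n
    money = k
    while 0 < money < n:
        money += random.choice((-1, 1))
    return money == n  # True if we left with $n

k, n, trials = 3, 10, 100_000
wins = sum(gamblers_ruin(k, n) for _ in range(trials))
print(f"estimated Pr[leave with $n] = {wins / trials:.3f}, theory k/n = {k / n:.3f}")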

Another Way To Look At It [number line: 0 … k … n; the node n is green, the node 0 is red] You go into a casino with $k, and at each time step you bet $1 on a fair game. You leave when you are broke or have $n. Question 2: What is the probability that you leave with $n? = the probability that I hit green before I hit red

Random Walks and Electrical Networks What is the chance I reach green before red? Same as the voltage at my node if the edges are identical resistors and we put a 1-volt battery between green and red

Random Walks and Electrical Networks Same as the equations for voltage if the edges all have the same resistance! p_x = Pr(reach green first starting from x) p_green = 1, p_red = 0 And for the rest: p_x = average of p_y over the neighbors y ∈ Nbr(x)
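
To make the correspondence concrete, here is a small Python sketch (the four-node graph is an invented example, not one from the slides): it repeatedly replaces each interior p_x by the average of its neighbors' values, which converges to the unique solution of the equations above.

nbrs = {
    'red':   ['a', 'b'],
    'a':     ['red', 'b', 'green'],
    'b':     ['red', 'a', 'green'],
    'green': ['a', 'b'],
}
p = {x: 0.0 for x in nbrs}   # boundary condition: p_red = 0 ...
p['green'] = 1.0             # ... and p_green = 1
for _ in range(1000):        # iterate the averaging equations to a fixed point
    for x in ('a', 'b'):     # only interior nodes get updated
        p[x] = sum(p[y] for y in nbrs[x]) / len(nbrs[x])
print(p)  # p_x is also the voltage at x with unit resistors and a 1-volt battery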

Another Way To Look At It [number line: 0 … k … n] You go into a casino with $k, and at each time step you bet $1 on a fair game. You leave when you are broke or have $n. Question 2: What is the probability that you leave with $n? voltage(k) = k/n = Pr[hitting n before 0, starting at k]!!!

Getting Back Home Lost in a city, you want to get back to your hotel. How should you do this? Depth First Search! But that requires a good memory and a piece of chalk

Getting Back Home How about walking randomly?

Will this work? When will I get home? Is Pr[ reach home ] = 1? What is E[ time to reach home ]?

Pr[ will reach home ] = 1

We Will Eventually Get Home Look at the first n steps. There is a non-zero chance p_1 that we get home; in fact p_1 ≥ (1/n)^n. Suppose we fail. Then, wherever we are, there is a chance p_2 ≥ (1/n)^n that we hit home in the next n steps from there. Probability of failing to reach home by time kn = (1 – p_1)(1 – p_2) … (1 – p_k) → 0 as k → ∞

Furthermore: if the graph has n nodes and m edges, then E[time to visit all nodes] ≤ 2m × (n – 1). E[time to reach home] is at most this

Cover Times Cover time (from u): C_u = E[time to visit all vertices | start at u] Cover time of the graph (worst-case expected time to see all vertices): C(G) = max_u { C_u }

Cover Time Theorem If the graph G has n nodes and m edges, then the cover time of G is C(G) ≤ 2m(n – 1). Any graph on n vertices has < n^2/2 edges. Hence C(G) < n^3 for all graphs G
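
As an illustration of the bound (a simulation sketch of my own, not from the lecture), here is an empirical estimate of the cover time of the n-cycle, for which m = n:

import random

def cover_time(nbrs, start):
    # Number of steps until a walk from `start` has visited every vertex
    seen, cur, steps = {start}, start, 0
    while len(seen) < len(nbrs):
        cur = random.choice(nbrs[cur])
        seen.add(cur)
        steps += 1
    return steps

n = 20
nbrs = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}  # the n-cycle, so m = n
avg = sum(cover_time(nbrs, 0) for _ in range(2000)) / 2000
print(f"empirical cover time ≈ {avg:.0f}, bound 2m(n-1) = {2 * n * (n - 1)}")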

Actually, we get home pretty fast… The chance that we don't hit home within 2k × 2m(n – 1) steps is at most (½)^k

A Simple Calculation True or False: if the average income of people is $100, then more than 50% of the people can be earning more than $200 each. False! Otherwise the average would be higher!!!

Andrei A. Markov Markov's Inequality If X is a non-negative r.v. with mean E[X], then Pr[X > 2 E[X]] ≤ ½ More generally, Pr[X > k E[X]] ≤ 1/k

Markov's Inequality A non-negative random variable X has expectation A = E[X]. A = E[X] = E[X | X > 2A] Pr[X > 2A] + E[X | X ≤ 2A] Pr[X ≤ 2A] ≥ E[X | X > 2A] Pr[X > 2A] (since X is non-negative). Also, E[X | X > 2A] > 2A. Therefore A ≥ 2A × Pr[X > 2A], so Pr[X > 2A] ≤ ½. In general, Pr[X > k × expectation] ≤ 1/k
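
A numeric illustration (my own sketch; the exponential distribution is just a convenient non-negative example with mean 1): sample X and compare the tail Pr[X > k·E[X]] against Markov's 1/k bound.

import random

samples = [random.expovariate(1.0) for _ in range(100_000)]  # non-negative, E[X] = 1
for k in (2, 3, 5):
    tail = sum(x > k for x in samples) / len(samples)
    print(f"Pr[X > {k}·E[X]] ≈ {tail:.4f}  ≤  1/{k} = {1 / k:.4f}")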

Actually, we get home pretty fast… The chance that we don't hit home within 2k × 2m(n – 1) steps is at most (½)^k

An Averaging Argument Suppose I start at u. E[time to hit all vertices | start at u] ≤ C(G) Hence, by Markov's Inequality: Pr[time to hit all vertices > 2C(G) | start at u] ≤ ½

So Let's Walk Some Mo! Pr[time to hit all vertices > 2C(G) | start at u] ≤ ½ Suppose at time 2C(G) I'm at some node v, with more nodes still to visit. Pr[haven't hit all vertices in 2C(G) more time | start at v] ≤ ½ The chance that you failed both times is ≤ ¼ = (½)^2. Hence, Pr[haven't hit everyone in time k × 2C(G)] ≤ (½)^k

Hence, if we know that the expected cover time C(G) < 2m(n – 1), then Pr[home by time 4k m(n – 1)] ≥ 1 – (½)^k

Random walks on infinite graphs

‘To steal a joke from Kakutani (U.C.L.A. colloquium talk): “A drunk man will eventually find his way home but a drunk bird may get lost forever.”’ Shizuo Kakutani R. Durrett, Probability: Theory and Examples, Fourth edition, Cambridge University Press, New York, NY, 2010, p. 191.

Random Walk On a Line [number line with origin 0 and position i] Flip an unbiased coin and go left/right. Let X_t be the position at time t. Pr[X_t = i] = Pr[#heads – #tails = i] = Pr[#heads – (t – #heads) = i] = Pr[#heads = (t + i)/2] = (t choose (t+i)/2) / 2^t

Random Walk On a Line Pr[X_2t = 0] = (2t choose t) / 2^2t = Θ(1/√t), by Stirling's approximation. Let Y_2t be the indicator for (X_2t = 0) ⇒ E[Y_2t] = Θ(1/√t). Let Z_2n = the number of visits to the origin in 2n steps. Then E[Z_2n] = E[Σ_{t = 1…n} Y_2t] = Θ(1/√1 + 1/√2 + … + 1/√n) = Θ(√n)

In n steps, you expect to return to the origin Θ(√n) times!
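
To see the Θ(√n) growth empirically, here is a short simulation sketch (assumptions mine: fair ±1 steps starting from 0; the constant hidden in the Θ is about √(2/π) ≈ 0.8):

import math, random

def origin_visits(n):
    # Count returns to the origin during an n-step ±1 walk
    pos, visits = 0, 0
    for _ in range(n):
        pos += random.choice((-1, 1))
        if pos == 0:
            visits += 1
    return visits

n, trials = 10_000, 500
avg = sum(origin_visits(n) for _ in range(trials)) / trials
print(f"average visits ≈ {avg:.1f}, compare √n = {math.sqrt(n):.1f}")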

How About a 2-d Grid? Let us simplify our 2-d random walk: move in both the x-direction and y-direction…

In The 2-d Walk Returning to the origin in the grid ⇔ both "line" random walks return to their origins. Pr[visit origin at time t] = Θ(1/√t) × Θ(1/√t) = Θ(1/t) E[# of visits to origin by time n] = Θ(1/1 + 1/2 + 1/3 + … + 1/n) = Θ(log n)
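
The same experiment for the simplified 2-d walk (a sketch under the slide's diagonal-move convention: each step changes x and y independently by ±1) shows the much slower Θ(log n) growth; for this walk the expected count is roughly (1/π)·ln n:

import math, random

def origin_visits_2d(n):
    # Each step moves ±1 in x AND ±1 in y, so each coordinate is its own line walk
    x = y = visits = 0
    for _ in range(n):
        x += random.choice((-1, 1))
        y += random.choice((-1, 1))
        if x == 0 and y == 0:
            visits += 1
    return visits

n, trials = 40_000, 300
avg = sum(origin_visits_2d(n) for _ in range(trials)) / trials
print(f"average visits ≈ {avg:.2f}, compare (1/π)·ln n ≈ {math.log(n) / math.pi:.2f}")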

But In 3D Pr[visit origin at time t] = Θ(1/√t)^3 = Θ(1/t^(3/2)) So lim_{n → ∞} E[# of visits by time n] < K (a constant). E[visits in total] = Pr[return to origin] × (1 + E[visits in total]) + Pr[never return to origin] × 0 Rearranging: Pr[never return to origin] = 1/(1 + E[visits in total]) > 1/(1 + K) > 0

Here's What You Need to Know… Random Walk on a Line Cover Time of a Graph Markov's Inequality