Presentation transcript:

17.1 CompSci 102 Today’s topics: Instant Insanity, Random Walks

17.2 CompSci 102 Instant Insanity
Given four cubes, how can we stack them so that, whether the stack is viewed from the front, back, left, or right, one sees all four colors?

17.3 CompSci 102 Cubes 1 & 2

17.4 CompSci 102 Cubes 3 & 4

17.5 CompSci 102 Creating a graph formulation
Create a multigraph:
– Four vertices represent the four face colors.
– For each pair of opposite faces on a cube, connect the vertices for those two colors with an edge.
– Label each edge with the number of its cube.
Solving:
– Find subgraphs with certain properties.
– What properties?
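
The construction is easy to express in code. Here is a minimal sketch in Python; the 6-tuple face order and the cube colorings are illustrative assumptions, since the actual cubes appear only in the slides' figures.

```python
# One vertex per color; for each cube, add one edge per pair of opposite
# faces, labeled with the cube's number.  With faces ordered
# (front, back, left, right, top, bottom), the opposite pairs sit at
# indices (0, 1), (2, 3), and (4, 5).
def insanity_multigraph(cubes):
    edges = []
    for k, cube in enumerate(cubes, start=1):
        for a, b in [(0, 1), (2, 3), (4, 5)]:
            edges.append((cube[a], cube[b], k))  # (color, color, cube label)
    return edges

# Hypothetical colorings (the real ones are in the slides' figures):
cubes = [("R", "G", "B", "W", "R", "G"),
         ("G", "W", "R", "B", "G", "R"),
         ("B", "R", "W", "G", "W", "B"),
         ("W", "B", "G", "R", "B", "W")]
print(insanity_multigraph(cubes))
```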

17.6 CompSci 102 Summary
Graph-theoretic formulation of Instant Insanity: find two edge-disjoint labeled factors in the multigraph of the puzzle (subgraphs that use exactly one edge from each cube and give every color vertex degree 2), one for the left-right sides and one for the front-back sides. Then use the clockwise traversal procedure to determine the left-right and front-back arrangements of each cube.
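
For four cubes the search space is also small enough for a direct brute force over cube orientations, which makes a handy cross-check on the graph method. The sketch below is an illustration of that alternative, not the slides' procedure; it reuses the hypothetical colorings from above.

```python
from itertools import product

# A cube is a 6-tuple of face colors: (front, back, left, right, top, bottom).

def roll(c):
    # Tip the cube forward: top -> front, front -> bottom, back -> top.
    f, b, l, r, u, d = c
    return (u, d, l, r, b, f)

def turn(c):
    # Spin the cube a quarter turn, seen from above: right -> front.
    f, b, l, r, u, d = c
    return (r, l, f, b, u, d)

def orientations(c):
    # A standard "roll and turn" sequence that enumerates the 24 rotations.
    for _ in range(2):
        for _ in range(3):
            c = roll(c)
            yield c
            for _ in range(3):
                c = turn(c)
                yield c
        c = roll(turn(roll(c)))

def solve(cubes):
    # Possible (front, back, left, right) views of each cube.
    views = [{o[:4] for o in orientations(c)} for c in cubes]
    for stack in product(*views):
        # Each of the stack's four sides must show all four colors.
        if all(len({view[i] for view in stack}) == 4 for i in range(4)):
            return stack
    return None

# Hypothetical colorings again; the real ones are in the slides' figures.
cubes = [("R", "G", "B", "W", "R", "G"),
         ("G", "W", "R", "B", "G", "R"),
         ("B", "R", "W", "G", "W", "B"),
         ("W", "B", "G", "R", "B", "W")]
print(solve(cubes))  # a stack of (front, back, left, right) views, or None
```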

17.7 CompSci 102 Try it out: Find all Instant Insanity solutions to the game with the multigraph shown in the slide's figure.

17.8 CompSci 102 Answer: the two edge-disjoint factors shown in the slide's figures.

17.9 CompSci 102 Getting back home
Lost in a city, you want to get back to your hotel. How should you do this?
– Depth-first search? What resources does this algorithm require? Why not breadth-first search?
– Walk randomly. What does this algorithm require?

17.10 CompSci 102 Random Walking
Will it work?
– Pr[ we will reach home ] = ?
When will I get home?
– Given that there are n nodes and m edges:
– E[ time to visit all nodes ] ≤ 2m × (n − 1)

17.11 CompSci 102 Cover times
Let us define a couple of useful things:
– Cover time (from u): C_u = E[ time to visit all vertices | start at u ]
– Cover time of the graph: C(G) = max_u { C_u }
Cover time theorem: C(G) ≤ 2m(n − 1).
– What is the max value of C(G) in terms of n?
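
C_u is easy to estimate empirically, which helps build intuition for the theorem's bound. A minimal simulation sketch; the adjacency-list format and the 4-cycle example are assumptions for illustration.

```python
import random

def cover_time_estimate(adj, u, trials=10000):
    """Average number of steps a walk from u needs to visit every node."""
    total = 0
    for _ in range(trials):
        current, visited, steps = u, {u}, 0
        while len(visited) < len(adj):
            current = random.choice(adj[current])  # uniform random neighbor
            visited.add(current)
            steps += 1
        total += steps
    return total / trials

# Example: a 4-cycle, so n = 4, m = 4 and the theorem gives 2m(n-1) = 24.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(cover_time_estimate(cycle, 0))  # ~6, comfortably below the bound
```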

17.12 CompSci 102 We will eventually get home
Look at the first n steps.
– There is a non-zero chance p_1 that we get home.
Suppose we fail.
– Then, wherever we are, there is a chance p_2 > 0 that we hit home in the next n steps from there.
Probability of failing to reach home by time kn:
– (1 − p_1)(1 − p_2) … (1 − p_k) → 0 as k → ∞

17.13 CompSci 102 In fact
Pr[ we don't get home by 2k·C(G) steps ] ≤ (½)^k
Recall: C(G) = cover time of G ≤ 2m(n − 1)

17.14 CompSci 102 An averaging argument
Suppose I start at u.
– E[ time to hit all vertices | start at u ] ≤ C(G)
Hence,
– Pr[ time to hit all vertices > 2C(G) | start at u ] ≤ ½.
Why? Else this average would be higher. (This is Markov's inequality.)

17.15 CompSci 102 Markov's Inequality
A nonnegative random variable X has expectation A = E[X]. Then:
A = E[X] = E[X | X > 2A] · Pr[X > 2A] + E[X | X ≤ 2A] · Pr[X ≤ 2A]
  ≥ E[X | X > 2A] · Pr[X > 2A]   (since X ≥ 0, the second term is ≥ 0)
Also, E[X | X > 2A] > 2A.
⇒ A ≥ 2A · Pr[X > 2A] ⇒ Pr[X > 2A] ≤ ½
In general: Pr[ X exceeds k × its expectation ] ≤ 1/k.
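
The bound is easy to test numerically for any nonnegative random variable. A small sketch, using an exponential distribution purely as an example:

```python
import random

samples = [random.expovariate(1.0) for _ in range(100_000)]  # mean A = 1
A = sum(samples) / len(samples)
for k in (2, 3, 5):
    frac = sum(x > k * A for x in samples) / len(samples)
    print(f"Pr[X > {k}*E[X]] ~ {frac:.3f}   (Markov bound: {1/k:.3f})")
```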

17.16 CompSci 102 An averaging argument
Suppose I start at u.
– E[ time to hit all vertices | start at u ] ≤ C(G)
Hence, by Markov's inequality,
– Pr[ time to hit all vertices > 2C(G) | start at u ] ≤ ½.
Suppose that at time 2C(G) we are at some node v, with more nodes still to visit.
– Pr[ haven't hit all vertices in 2C(G) more time | start at v ] ≤ ½.
Chance that you failed both times ≤ ¼!

17.17 CompSci 102 The power of independence
It is like flipping a coin with tails probability q ≤ ½.
The probability that you get k tails is q^k ≤ (½)^k (because the trials are independent!).
Hence,
– Pr[ haven't hit everyone in time k × 2C(G) ] ≤ (½)^k
Exponentially small in k!

17.18 CompSci 102 Hence, if we know that the expected cover time satisfies C(G) ≤ 2m(n − 1), then Pr[ home by time 4km(n − 1) ] ≥ 1 − (½)^k.

17.19 CompSci 102 Random walks on infinite graphs
“A drunk man will find his way home, but a drunk bird may get lost forever.” – Shizuo Kakutani

17.20 CompSci 102 Random Walk on a line
Flip an unbiased coin and go left/right.
Let X_t be the position at time t.
Pr[ X_t = i ] = Pr[ #heads − #tails = i ]
= Pr[ #heads − (t − #heads) = i ]
= Pr[ #heads = (t + i)/2 ]
= C(t, (t+i)/2) / 2^t
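
The closed form is easy to check by simulation; here is a quick sketch (the particular t and i are arbitrary):

```python
import random
from math import comb

t, i = 20, 4  # position i after t steps; t + i must be even
exact = comb(t, (t + i) // 2) / 2**t
trials = 200_000
hits = sum(sum(random.choice((-1, 1)) for _ in range(t)) == i
           for _ in range(trials))
print(f"simulated {hits / trials:.4f}   vs   exact {exact:.4f}")
```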

17.21 CompSci 102 Unbiased Random Walk
Pr[ X_2t = 0 ] = C(2t, t) / 2^(2t) = Θ(1/√t)   (by Stirling's approximation)
Y_2t = indicator for (X_2t = 0) ⇒ E[ Y_2t ] = Θ(1/√t)
Z_2n = number of visits to the origin in 2n steps
⇒ E[ Z_2n ] = E[ Σ_{t=1…n} Y_2t ] = Θ(1/√1 + 1/√2 + … + 1/√n) = Θ(√n)
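
The Θ(√n) growth is visible in simulation. A short sketch:

```python
import random
from math import sqrt

def origin_visits(steps):
    """Count how often a simple 1-d walk is at the origin."""
    pos, visits = 0, 0
    for _ in range(steps):
        pos += random.choice((-1, 1))
        visits += (pos == 0)
    return visits

for n in (10**3, 10**4, 10**5):
    avg = sum(origin_visits(n) for _ in range(200)) / 200
    print(f"n = {n:>6}: ~{avg:6.1f} visits   (sqrt(n) = {sqrt(n):.1f})")
```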

17.22 CompSci 102 In n steps, you expect to return to the origin Θ (√n) times!

17.23 CompSci 102 Simple Claim
Recall: if we repeatedly flip a coin with heads probability p,
– E[ # of flips till the first head ] = 1/p.
Claim: If Pr[ not returning to the origin ] = p, then
– E[ number of times at the origin ] = 1/p.
Proof: Let H = we never return to the origin, T = we do.
– Each return to the origin is like getting a tails.
– E[ # of returns ] = E[ # tails before a head ] = 1/p − 1.
– But we started at the origin too, which adds one more visit: 1/p − 1 + 1 = 1/p.
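
The coin-flip fact used here is quick to verify numerically. A small sketch of the "tails before the first head" expectation:

```python
import random

def tails_before_head(p):
    count = 0
    while random.random() >= p:  # each iteration is one tail
        count += 1
    return count  # a head ends the run

p = 0.25
trials = 100_000
avg = sum(tails_before_head(p) for _ in range(trials)) / trials
print(f"average tails before a head: {avg:.3f}   (expected 1/p - 1 = {1/p - 1})")
```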

17.24 CompSci 102 We will return…
Claim: If Pr[ not returning to the origin ] = p, then E[ number of times at the origin ] = 1/p.
Theorem: Pr[ we return to the origin ] = 1.
Proof: Suppose not. Then p = Pr[ never return ] > 0.
⇒ E[ # times at origin ] = 1/p, a finite constant.
But we showed that E[ Z_n ] = Θ(√n) → ∞, a contradiction.

17.25 CompSci 102 How about a 2-d grid?
Let us simplify our 2-d random walk: at each step, move one unit in the x-direction and one unit in the y-direction (a diagonal step), each direction chosen independently at random. The x- and y-coordinates then perform two independent random walks on a line.

17.30 CompSci 102 In the 2-d walk
Returning to the origin in the grid ⇔ both “line” random walks return to their origins.
Pr[ visit origin at time t ] = Θ(1/√t) × Θ(1/√t) = Θ(1/t)
E[ # of visits to origin by time n ] = Θ(1/1 + 1/2 + 1/3 + … + 1/n) = Θ(log n)
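
The same counting experiment as in 1-d shows the much slower Θ(log n) growth; this sketch uses the diagonal-step simplification, so x and y are independent 1-d walks:

```python
import random
from math import log

def origin_visits_2d(steps):
    x = y = visits = 0
    for _ in range(steps):
        x += random.choice((-1, 1))  # a diagonal step moves
        y += random.choice((-1, 1))  # both coordinates at once
        visits += (x == 0 and y == 0)
    return visits

for n in (10**3, 10**4, 10**5):
    avg = sum(origin_visits_2d(n) for _ in range(200)) / 200
    print(f"n = {n:>6}: ~{avg:5.2f} visits   (log n = {log(n):.1f})")
```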

17.31 CompSci 102 We will return (again!)…
Claim: If Pr[ not returning to the origin ] = p, then E[ number of times at the origin ] = 1/p.
Theorem: Pr[ we return to the origin ] = 1.
Proof: Suppose not. Then p = Pr[ never return ] > 0.
⇒ E[ # times at origin ] = 1/p, a finite constant.
But we showed that E[ Z_n ] = Θ(log n) → ∞, a contradiction.

17.32 CompSci 102 But in 3-d
Pr[ visit origin at time t ] = Θ(1/√t)^3 = Θ(1/t^(3/2))
lim_{n→∞} E[ # of visits by time n ] < K (a constant)
Hence Pr[ never return to the origin ] > 1/K.
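
In the same simulation the visit count now flattens out as n grows, which is exactly the transience the slide describes (again using the diagonal-step simplification, extended to a third coordinate):

```python
import random

def origin_visits_3d(steps):
    x = y = z = visits = 0
    for _ in range(steps):
        x += random.choice((-1, 1))
        y += random.choice((-1, 1))
        z += random.choice((-1, 1))
        visits += (x == 0 and y == 0 and z == 0)
    return visits

# The average number of visits stays bounded as n grows:
for n in (10**3, 10**4, 10**5):
    avg = sum(origin_visits_3d(n) for _ in range(200)) / 200
    print(f"n = {n:>6}: ~{avg:.2f} visits")
```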