17.1 CompSci 102 Today's topics: Instant Insanity; Random Walks


1 17.1 CompSci 102 Today's topics: Instant Insanity; Random Walks

2 17.2 CompSci 102 Instant Insanity Given four cubes, how can we stack them so that, whether the stack is viewed from the front, back, left, or right, one sees all four colors?

3 17.3 CompSci 102 Cubes 1 & 2

4 17.4 CompSci 102 Cubes 3 & 4

5 17.5 CompSci 102 Creating a graph formulation Create a multigraph:
–Four vertices represent the four face colors
–Connect two vertices with an edge when faces of those colors are opposite each other on a cube
–Label each edge with its cube
Solving:
–Find subgraphs with certain properties
–What properties?
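The multigraph construction above can be sketched in code. The cube layouts below are made-up illustrative data, not the actual puzzle cubes from the lecture:

```python
# A minimal sketch of the multigraph construction: each cube contributes
# three labeled edges, one per pair of opposite faces.
# The colors and cube layouts here are hypothetical examples.

# Each cube: three pairs of opposite-face colors
# (front/back, left/right, top/bottom).
cubes = [
    [("R", "G"), ("B", "W"), ("R", "R")],  # cube 1 (made-up data)
    [("G", "B"), ("W", "R"), ("G", "W")],  # cube 2
    [("B", "R"), ("G", "G"), ("W", "B")],  # cube 3
    [("W", "G"), ("R", "B"), ("B", "W")],  # cube 4
]

def build_multigraph(cubes):
    """Return a list of labeled edges (color_u, color_v, cube_index)."""
    edges = []
    for i, cube in enumerate(cubes, start=1):
        for u, v in cube:  # each opposite-face pair becomes an edge labeled by its cube
            edges.append((u, v, i))
    return edges

edges = build_multigraph(cubes)
```

With four cubes and three opposite-face pairs each, the multigraph always has exactly twelve labeled edges on the four color vertices; solving then amounts to searching this edge list for the subgraphs described on the next slide.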

6 17.6 CompSci 102 Summary Graph-theoretic formulation of Instant Insanity: find two edge-disjoint labeled factors in the graph of the Instant Insanity puzzle, one for the left-right sides and one for the front-back sides. Use the clockwise traversal procedure to determine the left-right and front-back arrangements of each cube.

7 17.7 CompSci 102 Try it out: find all Instant Insanity solutions to the game with the given multigraph. [Figure: multigraph on vertices 1–4 with edges labeled by cube.]

8 17.8 CompSci 102 Answer: [Figure: the two edge-disjoint labeled factors on vertices 1–4.]

9 17.9 CompSci 102 Getting back home Lost in a city, you want to get back to your hotel. How should you do this?
–Depth-first search? What resources does this algorithm require? Why not breadth-first?
–Walk randomly. What does this algorithm require?

10 17.10 CompSci 102 Random Walking Will it work?
–Pr[ will reach home ] = ?
When will I get home?
–Given that there are n nodes and m edges,
–E[ time to visit all nodes ] ≤ 2m × (n − 1)

11 17.11 CompSci 102 Cover times Let us define a couple of useful things:
–Cover time (from u): C_u = E[ time to visit all vertices | start at u ]
–Cover time of the graph: C(G) = max_u { C_u }
Cover time theorem:
–C(G) ≤ 2m(n − 1)
–What is the max value of C(G) in terms of n?
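The cover time bound can be checked empirically. A minimal sketch, using an assumed example graph (a cycle on 8 nodes, not a graph from the lecture):

```python
import random

# Estimate the cover time of a cycle graph by simulation and compare it
# against the theorem's bound C(G) <= 2m(n - 1).

def cover_time_once(adj, start, rng):
    """Steps taken by one random walk from `start` until every node is visited."""
    seen = {start}
    node, steps = start, 0
    while len(seen) < len(adj):
        node = rng.choice(adj[node])  # move to a uniformly random neighbor
        seen.add(node)
        steps += 1
    return steps

n = 8
adj = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}  # cycle: m = n edges
rng = random.Random(0)
trials = 2000
avg = sum(cover_time_once(adj, 0, rng) for _ in range(trials)) / trials

m = n
bound = 2 * m * (n - 1)  # = 112 here
assert n - 1 <= avg <= bound  # at least n-1 steps needed; theorem caps the mean
```

For a cycle the true cover time is n(n − 1)/2, far below the general 2m(n − 1) bound, which is why the simulated average sits comfortably inside the assertion.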

12 17.12 CompSci 102 We will eventually get home Look at the first n steps.
–There is a non-zero chance p_1 that we get home.
Suppose we fail.
–Then, wherever we are, there is a chance p_2 > 0 that we hit home in the next n steps from there.
Probability of failing to reach home by time kn
–= (1 − p_1)(1 − p_2) … (1 − p_k) → 0 as k → ∞

13 17.13 CompSci 102 In fact, Pr[ we don't get home by 2k C(G) steps ] ≤ (½)^k. Recall: C(G) = cover time of G ≤ 2m(n − 1).

14 17.14 CompSci 102 An averaging argument Suppose I start at u.
–E[ time to hit all vertices | start at u ] ≤ C(G)
Hence,
–Pr[ time to hit all vertices > 2C(G) | start at u ] ≤ ½.
Why? Else this average would be higher. (This is called Markov's inequality.)

15 17.15 CompSci 102 Markov's Inequality A nonnegative random variable X has expectation A = E[X].
A = E[X] = E[X | X > 2A] Pr[X > 2A] + E[X | X ≤ 2A] Pr[X ≤ 2A]
≥ E[X | X > 2A] Pr[X > 2A] (the dropped term is ≥ 0 since X ≥ 0)
Also, E[X | X > 2A] > 2A
⇒ A ≥ 2A × Pr[X > 2A] ⇒ ½ ≥ Pr[X > 2A]
In general: Pr[ X exceeds k × its expectation ] ≤ 1/k.
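Markov's inequality is easy to sanity-check by sampling. A small sketch using an assumed nonnegative distribution (exponential with mean 1, chosen only for illustration):

```python
import random

# Empirical check of Markov's inequality: for nonnegative X with mean A,
# Pr[X > k * A] <= 1/k for every k > 1.

rng = random.Random(1)
samples = [rng.expovariate(1.0) for _ in range(100_000)]  # nonnegative, E[X] = 1
mean = sum(samples) / len(samples)

for k in (2, 3, 5):
    frac = sum(x > k * mean for x in samples) / len(samples)
    assert frac <= 1 / k  # Markov's bound holds on this sample
```

For the exponential the true tail is e^(−k), far below 1/k, which illustrates that Markov's inequality is a worst-case bound: it uses only the mean and nonnegativity, nothing about the distribution's shape.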

16 17.16 CompSci 102 An averaging argument Suppose I start at u.
–E[ time to hit all vertices | start at u ] ≤ C(G)
Hence, by Markov's inequality,
–Pr[ time to hit all vertices > 2C(G) | start at u ] ≤ ½
Suppose at time 2C(G) we are at some node v, with more nodes still to visit.
–Pr[ haven't hit all vertices in 2C(G) more time | start at v ] ≤ ½.
Chance that you failed both times ≤ ¼!

17 17.17 CompSci 102 The power of independence It is like flipping a coin with tails probability q ≤ ½. The probability that you get k tails in a row is q^k ≤ (½)^k (because the trials are independent!). Hence,
–Pr[ haven't hit everyone in time k × 2C(G) ] ≤ (½)^k
Exponential in k!

18 17.18 CompSci 102 Hence, since the expected cover time satisfies C(G) ≤ 2m(n − 1), we get Pr[ home by time 4km(n − 1) ] ≥ 1 − (½)^k.

19 17.19 CompSci 102 Random walks on infinite graphs "A drunk man will find his way home, but a drunk bird may get lost forever." - Shizuo Kakutani

20 17.20 CompSci 102 Random Walk on a line Flip an unbiased coin and go left/right. Let X_t be the position at time t.
Pr[ X_t = i ] = Pr[ #heads − #tails = i ]
= Pr[ #heads − (t − #heads) = i ]
= Pr[ #heads = (t + i)/2 ] = C(t, (t + i)/2) / 2^t

21 17.21 CompSci 102 Unbiased Random Walk Pr[ X_2t = 0 ] = C(2t, t) / 2^2t = Θ(1/√t), by Stirling's approximation.
Y_2t = indicator for (X_2t = 0) ⇒ E[ Y_2t ] = Θ(1/√t)
Z_2n = number of visits to origin in 2n steps
⇒ E[ Z_2n ] = E[ Σ_{t = 1…n} Y_2t ]
= Θ(1/√1 + 1/√2 + … + 1/√n) = Θ(√n)
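Both claims on this slide can be verified exactly with binomial coefficients. A short check, written as a sketch of the standard identities rather than lecture code:

```python
from math import comb, sqrt, pi

# Check that Pr[X_{2t} = 0] = C(2t, t) / 2^{2t} behaves like 1/sqrt(pi * t),
# as Stirling's approximation predicts, and that the expected number of
# origin-visits in 2n steps grows like sqrt(n).

def p_return(t):
    """Exact probability the walk is at the origin after 2t steps."""
    return comb(2 * t, t) / 4**t

for t in (10, 100, 1000):
    exact = p_return(t)
    approx = 1 / sqrt(pi * t)  # the Theta(1/sqrt(t)) constant is 1/sqrt(pi)
    assert abs(exact - approx) / approx < 0.05  # within 5% already at t = 10

# Expected visits to the origin in 2n steps: sum of the indicators' means.
n = 1000
expected_visits = sum(p_return(t) for t in range(1, n + 1))
assert 0.5 * sqrt(n) < expected_visits < 2 * sqrt(n)  # Theta(sqrt(n))
```

The hidden constant in Θ(√n) is 2/√π ≈ 1.13, which is why the partial sum lands between 0.5√n and 2√n.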

22 17.22 CompSci 102 In n steps, you expect to return to the origin Θ (√n) times!

23 17.23 CompSci 102 Simple Claim Recall: if we repeatedly flip a coin with heads probability p,
–E[ # of flips till heads ] = 1/p.
Claim: If Pr[ not return to origin ] = p, then
–E[ number of times at origin ] = 1/p.
Proof: H = never return to origin; T = we do return.
–Hence returning to the origin is like getting a tails.
–E[ # of returns ] = E[ # tails before a head ] = 1/p − 1.
–But we started at the origin too, so E[ times at origin ] = 1/p.
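The engine of this claim is just the geometric distribution, and it is easy to check by simulation. A sketch with an arbitrary illustrative value of p (not a value from the lecture):

```python
import random

# If each excursion from the origin independently fails to return with
# probability p, the total number of times at the origin (counting the start)
# is geometric with mean 1/p.

rng = random.Random(2)
p = 0.3  # Pr[never return] per excursion (assumed, for illustration)

def times_at_origin(rng):
    """Count visits to the origin: we start there, and with prob 1-p return again."""
    count = 1  # the starting visit
    while rng.random() > p:  # with probability 1 - p, one more return
        count += 1
    return count

trials = 200_000
avg = sum(times_at_origin(rng) for _ in range(trials)) / trials
assert abs(avg - 1 / p) < 0.05  # mean should be close to 1/p = 3.33...
```

The simulation separates E[# returns] = 1/p − 1 from E[# times at origin] = 1/p, exactly the "but we started at the origin too" step in the proof.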

24 17.24 CompSci 102 We will return… Claim: If Pr[ not return to origin ] = p, then E[ number of times at origin ] = 1/p.
Theorem: Pr[ we return to origin ] = 1.
Proof: Suppose not. Then p = Pr[ never return ] > 0,
⇒ E[ # times at origin ] = 1/p, a constant.
But we showed that E[ Z_n ] = Θ(√n) → ∞. Contradiction.

25 17.25 CompSci 102 How about a 2-d grid? Let us simplify our 2-d random walk: move in both the x-direction and y-direction at each step…

30 17.30 CompSci 102 Returning to the origin in the 2-d walk Returning to the origin in the grid ⇔ both "line" random walks return to their origins.
Pr[ visit origin at time t ] = Θ(1/√t) × Θ(1/√t) = Θ(1/t)
E[ # of visits to origin by time n ]
= Θ(1/1 + 1/2 + 1/3 + … + 1/n) = Θ(log n)
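Using the lecture's diagonal simplification (the 2-d walk is two independent 1-d walks), the 2-d return probability is the square of the 1-d one, and the Θ(log n) growth can be checked exactly:

```python
from math import comb, pi

# Expected origin-visits of the simplified 2-d walk by time 2n:
# each coordinate returns with probability C(2t, t)/4^t ~ 1/sqrt(pi*t),
# so the product is ~ 1/(pi*t) and the sum is a harmonic sum, Theta(log n).

def p_return_1d(t):
    """1-d walk: probability of being at the origin after 2t steps."""
    return comb(2 * t, t) / 4**t

n = 2000
expected_visits_2d = sum(p_return_1d(t) ** 2 for t in range(1, n + 1))

harmonic = sum(1 / t for t in range(1, n + 1))
assert abs(expected_visits_2d - harmonic / pi) < 1  # tracks (1/pi) * H_n
```

So in 2-d the harmonic sum still diverges, just very slowly, and the same contradiction argument as in 1-d goes through on the next slide.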

31 17.31 CompSci 102 We will return (again!)… Claim: If Pr[ not return to origin ] = p, then E[ number of times at origin ] = 1/p.
Theorem: Pr[ we return to origin ] = 1.
Proof: Suppose not. Then p = Pr[ never return ] > 0,
⇒ E[ # times at origin ] = 1/p, a constant.
But we showed that E[ Z_n ] = Θ(log n) → ∞. Contradiction.

32 17.32 CompSci 102 But in 3-d Pr[ visit origin at time t ] = Θ(1/√t)^3 = Θ(1/t^(3/2))
lim_{n→∞} E[ # of visits by time n ] < K (a constant)
Hence, Pr[ never return to origin ] > 1/K.
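The 3-d case differs because Σ 1/t^(3/2) converges. Again a sketch under the diagonal simplification (three independent 1-d walks), showing that the partial sums of the expected visit count flatten out below a small constant:

```python
from math import comb

# In 3-d (diagonal simplification), Pr[at origin at time 2t] is the cube of
# the 1-d return probability, ~ 1/(pi*t)^{3/2}. This series converges, so the
# expected number of origin-visits stays bounded by a constant K.

def p_return_1d(t):
    return comb(2 * t, t) / 4**t

partial = [sum(p_return_1d(t) ** 3 for t in range(1, n + 1))
           for n in (100, 1000, 2000)]

assert partial[0] < partial[1] < partial[2] < 0.5  # bounded: K is small here
assert partial[2] - partial[1] < 0.01              # the tail is negligible
```

A bounded expected visit count forces Pr[ never return ] > 0 by the 1/p claim, which is the content of Kakutani's drunk-bird remark: the walk is transient in 3-d.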

