
1 Random walks on undirected graphs and a little bit about Markov Chains Guy

2 One-dimensional random walk • A random walk on a line with positions 0, 1, 2, …, n. • If the walk is at 0 it moves to 1. Otherwise, from position i it moves to i+1 or to i-1, each with probability 1/2. • What is the expected number of steps to reach n?

3 The expected time function • T(n) = 0 • T(i) = 1 + (T(i+1) + T(i-1))/2 for 0 < i < n • T(0) = 1 + T(1) • Adding up all the equations gives T(n-1) = 2n-1. • From that we get T(n-2) = 4n-4, and in general T(i) = 2(n-i)n - (n-i)^2. • T(0) = n^2
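A minimal Python simulation (not from the slides; the function name is just illustrative) that estimates this expected hitting time and compares it with n^2:

```python
import random

def hitting_time(n):
    """One walk on {0,...,n}: forced step 0 -> 1, otherwise +/-1 with prob 1/2, stop at n."""
    pos, steps = 0, 0
    while pos != n:
        pos = 1 if pos == 0 else pos + random.choice((-1, 1))
        steps += 1
    return steps

n, trials = 20, 2000
avg = sum(hitting_time(n) for _ in range(trials)) / trials
print(avg, n * n)   # the empirical average should be close to n^2 = 400
```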

4 2-SAT: definition • A 2-CNF formula: (x + ¬y) & (¬x + ¬z) & (x + y) & (z + ¬x) • (x + y) and the others are called CLAUSES. • CNF means that we want to satisfy all of them. • (x + ¬y) is satisfied if x = T or y = F. • The question: is there a satisfying assignment? Trying all 2^n assignments is infeasible even for n = 70.

5 Remark: the case of two literals per clause is very special • With three literals per clause the problem (3-SAT) is NP-complete. • Not only that: it cannot be approximated better than 7/8. • Indeed, for (x + ¬y + ¬z) a uniformly random assignment satisfies the clause with probability 1 - 1/8 = 7/8, so 7/8 is the best approximation ratio possible. • 3-SAT(5) is also hard.

6 The random algorithm for 2-SAT • Start with an arbitrary assignment. • Let C be an unsatisfied clause. Choose one of the two literals of C at random and flip the value of its variable. • If the variables of C are x_1 and x_2, any satisfying assignment OPT disagrees with us on x_1 or on x_2. • Distance to OPT: with probability at least 1/2 it decreases by 1, and with probability at most 1/2 it increases by 1 (worst case). This is the one-dimensional random walk, so the expected number of flips is at most n^2.
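A short Python sketch of this flipping algorithm (my own encoding, not from the slides: a clause is a pair of signed integers, where literal k means variable |k| with the sign giving its polarity):

```python
import random

def random_2sat(n_vars, clauses, max_flips=None):
    """Randomized 2-SAT by flipping a random literal of an unsatisfied clause."""
    if max_flips is None:
        max_flips = 2 * n_vars * n_vars
    assign = {v: random.choice((True, False)) for v in range(1, n_vars + 1)}
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not (sat(c[0]) or sat(c[1]))]
        if not unsat:
            return assign                        # satisfying assignment found
        lit = random.choice(random.choice(unsat))
        assign[abs(lit)] = not assign[abs(lit)]  # flip one literal of a bad clause
    return None                                  # "probably unsatisfiable"

# The slide's example (x+¬y)(¬x+¬z)(x+y)(z+¬x) happens to be unsatisfiable,
# so here is a satisfiable variant without its last clause:
print(random_2sat(3, [(1, -2), (-1, -3), (1, 2)]))
```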

7 RP algorithm (can make a mistake) • If you do 2n^2 flips, the probability that we do not find a truth assignment when one exists is at most 1/2. • If you do n^4 flips, the probability that a truth assignment exists but we do not find it is at most 1/n^2. • What we use here is Markov's inequality: in any collection of non-negative numbers, at most 1/3 of them are at least 3 times the average. • There are several deterministic algorithms for 2-SAT.

8 Shuffling cards • Take the top card and insert it at a uniformly random position (including the top), one of n. • A simple claim: if a card is at a uniformly random position, it will still be at a uniformly random position after the next move. • So assume Pr(card is in place i) = 1/n for every i. • For the card to be in place i after the next step there are three possibilities.

9 Probability of the card to be in place i • First possibility: it is not the top card, it is in place i, and the top card is inserted into one of the i-1 places above it. • Second possibility (the events are disjoint): it is the top card and it is inserted into place i. • Third possibility: it is in place i+1 and the top card is inserted into one of the n-i places below place i. • Total: (1/n)·(i-1)/n + (1/n)·(1/n) + (1/n)·(n-i)/n = 1/n.

10 Stopping time • Once every card has been on top (equivalently, once the original bottom card has reached the top and been reinserted), the deck is random. • Track the bottom card. For it to move up by one position the number of moves is Geometric(1/n), so the expectation is n. • To go from second-from-bottom to third-from-bottom it is Geometric(2/n), expectation n/2. • This gives n + n/2 + n/3 + … = n(ln n + Θ(1)). • FAST
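A minimal simulation of the top-to-random shuffle and this stopping time (the function name is illustrative):

```python
import math, random

def top_to_random_stopping_time(n):
    """Count moves until the original bottom card reaches the top and is reinserted."""
    deck = list(range(n))                        # deck[0] is the top, deck[-1] the bottom
    bottom = deck[-1]
    moves = 0
    while True:
        card = deck.pop(0)
        deck.insert(random.randrange(n), card)   # one of n positions, top included
        moves += 1
        if card == bottom:
            return moves                         # bottom card just reinserted: deck is random

n, trials = 52, 1000
avg = sum(top_to_random_stopping_time(n) for _ in range(trials)) / trials
print(avg, n * math.log(n))   # average is n(ln n + Theta(1)); n ln n is about 205 for n = 52
```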

11 Random walks on undirected graphs • Given a graph, at each step move from the current vertex v to a neighbor chosen uniformly at random, i.e., with probability 1/d(v).

12–17 Random Walks (animation over six slides, figures only) • Given a graph, choose a vertex at random and repeatedly move to a random neighbor.

18 Markov chains • A generalization of random walks on undirected graphs. • The graph is directed. • The probabilities on the outgoing edges of each state sum to 1, but they need not be uniform. • We have a matrix P = (p_ij), where p_ij is the probability that from state i the chain moves to state j. • Say the initial distribution is Π_0 = (x_1, x_2, …, x_n).

19 Changing from state to state • Pr(next state = i) = Σ_j x_j · p_ji. • This is the inner product of the current distribution and column i of the matrix. • Therefore if we are in distribution Π_0, after one step the distribution is Π_1 = Π_0 · P. • After i steps it is Π_i = Π_0 · P^i. • What is a steady state?
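As a small numerical illustration (the 3-state transition matrix below is a made-up example, not from the slides), one can evolve Π_0 step by step with NumPy:

```python
import numpy as np

# Row i holds the probabilities p_ij of moving from state i to state j.
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.6, 0.4]])

pi = np.array([1.0, 0.0, 0.0])   # Pi_0: start in state 0 with certainty
for _ in range(100):             # Pi_i = Pi_0 * P^i, computed step by step
    pi = pi @ P
print(pi)                        # approaches the steady state
print(pi @ P)                    # a steady state satisfies Pi * P = Pi
```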

20 Steady state • A steady state is a distribution Π so that Π·P = Π: for every state i, the probability of being at i is the same in every round. • Conditions for convergence: • The graph has to be strongly connected; otherwise there may be several components with no edges out of them, and no unique steady state. • h_ii, the expected time to return to i starting from i, is finite. • Aperiodic: slightly more complex to state for general Markov chains; for random walks on undirected graphs it simply means the graph is not bipartite.

21 The bipartite graph example • If the graph is bipartite with sides V_1 and V_2, and we start in V_1, then we cannot be at a V_1 vertex after an odd number of transitions. • Therefore convergence to a steady state is not possible. • So for random walks on graphs we need the graph to be connected and not bipartite; the other properties then follow.

22 Fundamental theorem of Markov chains • Theorem: Given an aperiodic, irreducible Markov chain in which h_ii is finite for every i: • 1) There is a unique steady state. Thus to find the steady state, just solve Π·P = Π. • 2) h_ii = 1/Π_i (a geometric-distribution argument). • Remark: the mixing time is how fast the chain gets (very close) to the steady state.

23 Because it is unique, you just have to find the correct Π • For random walks on an undirected graph with m edges we claim that the steady state is (d_1/2m, d_2/2m, …, d_n/2m). • It is trivial to show that this is the steady state: multiply this vector by column i of the transition matrix. The only relevant entries are the neighbors of i, so the sum is Σ_{(j,i) ∈ E} (1/d_j)·(d_j/2m) = d_i/2m.
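A quick NumPy check of this claim on an arbitrary small graph (the adjacency matrix below is just an example):

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)   # undirected graph on 4 vertices

deg = A.sum(axis=1)              # d_i
P = A / deg[:, None]             # random-walk transition matrix, P[i, j] = 1/d_i on edges
m = A.sum() / 2                  # number of edges
pi = deg / (2 * m)               # claimed steady state: d_i / 2m
print(np.allclose(pi @ P, pi))   # True: Pi * P = Pi
```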

24 The expected time to visit all the vertices of a graph • A matrix is doubly stochastic if all its columns, as well as its rows, sum to 1. • Exercise (very simple): if the matrix is doubly stochastic, then the uniform distribution {1/n} is the steady state. • Define a new Markov chain on edges with directions, which means 2m states. • The walk is defined naturally: from edge (u,v) move to (v,w), where w is a random neighbor of v. • Exercise: show that this matrix is doubly stochastic.
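A sketch, reusing the same small example graph, that builds this directed-edge chain and checks that its matrix is doubly stochastic:

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
n = len(A)
edges = [(u, v) for u in range(n) for v in range(n) if A[u, v]]   # 2m directed edges
idx = {e: k for k, e in enumerate(edges)}
deg = A.sum(axis=1)

Q = np.zeros((len(edges), len(edges)))
for (u, v) in edges:
    for w in range(n):
        if A[v, w]:
            Q[idx[(u, v)], idx[(v, w)]] = 1.0 / deg[v]   # from (u,v) go to (v,w) w.p. 1/d(v)

print(np.allclose(Q.sum(axis=1), 1.0), np.allclose(Q.sum(axis=0), 1.0))   # both True
```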

25 The time we spend on every edge • By the above, in the steady state of the edge chain we spend the same fraction of time, 1/2m, on every directed edge, in both directions (this is a statement about the limit; short-run fluctuations are ignored). • Now, say that I want to bound h_ij, the expected time to get from i to j.

26 Showing h_ij + h_ji ≤ 2m for an edge (i,j) • Assume that we have just made the move i → j. • By the edge Markov chain, the expected time until we make this exact move again is 2m (the return time to a state is 1 over its steady-state probability 1/2m). • Forget how we got to j (the chain has no memory). • Before the move i → j happens again: • a) Starting at j, the walk first returns to i; the expected time for this is h_ji. • b) Then from i it must traverse the edge i → j, and in particular it reaches j, so this part takes at least h_ij in expectation. • c) Since the total is 2m in expectation, the claim follows.

27 Consider a spanning tree and a walk on the spanning tree • Take any spanning tree and a traversal of it (e.g., a DFS tour) in which the walk crosses each tree edge once from parent to child and once back. • Per parent–child edge we have h_ij + h_ji ≤ 2m. • Thus, summing over the n-1 edges of the tree, the cover time is at most 2m(n-1) < n^3.
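A rough simulation (on a small made-up graph) comparing the empirical cover time with the 2m(n-1) bound:

```python
import random

def cover_time(adj, start=0):
    """One random walk; counts steps until every vertex has been visited."""
    seen, v, steps = {start}, start, 0
    while len(seen) < len(adj):
        v = random.choice(adj[v])
        seen.add(v)
        steps += 1
    return steps

# A small arbitrary graph as adjacency lists (a 6-cycle plus two chords from vertex 0).
adj = {0: [1, 5, 3], 1: [0, 2], 2: [1, 3], 3: [2, 4, 0], 4: [3, 5], 5: [4, 0]}
m = sum(len(nbrs) for nbrs in adj.values()) // 2
avg = sum(cover_time(adj) for _ in range(2000)) / 2000
print(avg, 2 * m * (len(adj) - 1))   # empirical cover time vs. the 2m(n-1) bound
```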

28 Tight example: a clique on n/2 vertices with an attached path u_1, u_2, …, u_{n/2} (figure).

29 Bridges • For a bridge edge, h_ij + h_ji = 2m exactly (follows from the proof). • Consider the vertex u_1 where the clique meets the path: it is much harder to go right, away from the clique, than to go back left into it. • Thus it takes about n^2 expected time to go from u_1 to u_2, and about the same from u_2 to u_3, and so on. • Over the path this gives Ω(n^3). • Funny name: the lollipop graph.

30 Exponential cover time for directed graphs

31 Spectrum of a graph • We can represent a graph on vertices {1, 2, …, n} by an n×n symmetric matrix A. • Put A_ij = 1 if and only if (i,j) ∈ E. • Note that the matrix is symmetric. • An eigenvalue is a number λ so that A·v = λ·v for some nonzero vector v. • Since the matrix is symmetric, all eigenvalues are real numbers.

32 Relation of graphs and algebra • If we have a d-regular graph and we count all walks of length exactly k, we get n·d^k of them. • On average, then, there is a pair u,v so that walks_k(u,v) ≥ d^k/n. • What happens if d is only the average degree? • We use the symmetric matrix A that represents the graph G on vertices {1, 2, …, n}.

33 The inequality still holds; a two-slide proof • Say λ_0 ≥ λ_1 ≥ … ≥ λ_{n-1}. • λ_0 = max_{|x|=1} x^T·A·x. • Choose x_i = 1/n^{1/2}. It is easy to see that we get x^T·A·x = Σ A_ij / n = d, the average degree. • Thus λ_0 ≥ Σ A_ij / n = d. • Known: the eigenvalues of A^k are λ_i^k. • The number of i-to-j walks of length k is the (i,j) entry of A^k.

34 The walks proof • By symmetry, A^k(i,j) = A^k(j,i) = W_k(i,j), the number of length-k walks from i to j. • Trace((A^k)^2) = Σ_i A^{2k}(i,i) = Σ_i Σ_j A^k(i,j)·A^k(j,i) = Σ_i Σ_j W_k(i,j)^2 = Σ_i λ_i^{2k} ≥ λ_0^{2k} ≥ d^{2k}. • By averaging over the n^2 pairs, W_k(i,j)^2 ≥ d^{2k}/n^2 for some i,j. • Taking a square root we get W_k(i,j) ≥ d^k/n. QED
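A small NumPy sanity check of these identities on an example graph (the matrix below is arbitrary, not from the slides):

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
n, k = len(A), 5
d = A.sum() / n                        # average degree
Ak = np.linalg.matrix_power(A, k)      # W_k(i,j) = A^k(i,j): number of length-k walks
eig = np.linalg.eigvalsh(A)            # real eigenvalues of the symmetric matrix
print(eig.max() >= d)                  # lambda_0 >= average degree
print(np.trace(Ak @ Ak), (eig ** (2 * k)).sum())   # equal: trace((A^k)^2) = sum lambda_i^(2k)
print(Ak.max() >= d ** k / n)          # some pair has at least d^k / n walks
```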

35 Expanders • The definition is roughly: for every S ⊆ V of size at most n/2, the number of edges leaving S is at least c·|S| for some constant c. • We are interested in d-regular expanders with d a universal constant. • The largest eigenvalue is λ_0 = d. A graph is an expander iff λ_0 >> λ_1. • At best λ_1 is about d^{1/2}.

36 Random walks on expanders • The mixing time is very fast: the diameter is O(log n) and the mixing time is O(log n) as well. The proof uses the fact that the second eigenvalue is much smaller than the first. • Remarkable application: say we want error probability 1/2^k. A single run needs n random bits for error probability 1/2, and repeating it independently would cost fresh bits each time. • A random walk on an expander allows n + O(k) random bits to get a 1/2^k upper bound on the error.

37 Random walks, a 'real' example • Brownian motion: the random drifting of particles suspended in a fluid (a liquid or a gas). • It has a mathematical model used to describe such random movements, often called a particle theory. • Imagine a stadium full of people and balloons, with the people pushing the balloons around randomly.

