COMS 6998-06 Network Theory. Week 5: October 6, 2010. Dragomir R. Radev. Wednesdays, 6:10-8 PM, 325 Pupin Terrace. Fall 2010

(8) Random walks and electrical networks

Random walks A stochastic process on a graph. Transition matrix E. Simplest case: a regular 1-D graph.

Gambler’s ruin A has N pennies and B has M pennies. At each turn, one of them wins a penny from the other with probability 0.5. The game stops when one of them loses all his money.
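A minimal Monte Carlo sketch of the game (my illustration, not from the slides; the function name and parameters are mine). For the fair game it empirically matches the harmonic-function solution p(x) = x/N derived on the following slides.

```python
import random

def ruin_probability(x, total, trials=100_000):
    """Estimate the chance that a fair player starting with x pennies
    reaches `total` pennies before going broke."""
    wins = 0
    for _ in range(trials):
        fortune = x
        while 0 < fortune < total:
            fortune += 1 if random.random() < 0.5 else -1
        wins += fortune == total
    return wins / trials

# A starts with 3 of the N = 10 pennies in play; theory predicts 3/10.
print(ruin_probability(3, 10))
```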

Harmonic functions:
–p(0) = 0
–p(N) = 1
–p(x) = ½*p(x-1) + ½*p(x+1), for 0<x<N
–(in general, replace ½ with the bias in the walk)

Simple electrical circuit [Figure: a chain of unit resistors with V(0)=0 at one end and V(N)=1 at the other]

Arbitrary resistances

The Maximum principle Let f(x) be a harmonic function on a sequence S. Theorem:
–A harmonic function f(x) defined on S takes on its maximum value M and its minimum value m on the boundary.
Proof:
–Let M be the largest value of f, and let x be an interior point with f(x)=M. Since f(x) is the average of f(x-1) and f(x+1), and neither can exceed M, we must have f(x+1)=f(x-1)=M. If x-1 is still an interior point, continue with x-2, etc. In the worst case, we reach x=0, for which f(x)=M.

The Uniqueness principle Let f(x) be a harmonic function on a sequence S. Theorem:
–If f(x) and g(x) are harmonic functions on S such that f(x)=g(x) on the boundary points B, then f(x)=g(x) for all x.
Proof:
–Let h(x)=f(x)-g(x). If x is an interior point, then h(x) = ½*h(x-1) + ½*h(x+1), so h is harmonic. But h(x)=0 for x in B, and therefore, by the Maximum principle, its minimal and maximal values are both 0. Thus h(x)=0 for all x, which proves that f(x)=g(x) for all x.

How to find the unique solution? Try a linear function: f(x)=x/N. This function has the following properties:
–f(0)=0
–f(N)=1
–(f(x-1)+f(x+1))*1/2 = x/N = f(x)

Reaching the boundary Theorem:
–The random walker will reach either 0 or N.
Proof:
–Let h(x) be the probability that the walker never reaches the boundary. Then h(x) = ½*h(x+1) + ½*h(x-1), so h(x) is harmonic. Also h(0)=h(N)=0. According to the Maximum principle, h(x)=0 for all x.

Number of steps to reach the boundary
–m(0) = 0
–m(N) = 0
–m(x) = 1 + ½*m(x+1) + ½*m(x-1), for 0<x<N (each step contributes 1)
The expected number of steps until a one-dimensional random walk goes up to b or down to -a is ab. Examples: (a=1,b=1); (a=2,b=2). (Also: the displacement varies as sqrt(t), where t is time.)
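A small simulation sketch (my own check, not from the lecture) of the ab formula for the expected number of steps:

```python
import random

def mean_exit_time(a, b, trials=100_000):
    """Average number of steps for a fair walk started at 0 to hit +b or -a."""
    total_steps = 0
    for _ in range(trials):
        pos, steps = 0, 0
        while -a < pos < b:
            pos += 1 if random.random() < 0.5 else -1
            steps += 1
        total_steps += steps
    return total_steps / trials

print(mean_exit_time(2, 2))  # theory: a*b = 4
```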

Fair games In the penny game, after one iteration the expected fortune is ½(k-1) + ½(k+1) = k. Fair game = martingale. Now if A has x pennies out of a total of N, his expected final fortune is (1-p(x))·0 + p(x)·N = p(x)·N. Is the game fair if A can stop when he wants? No – e.g., stop playing when your fortune reaches $x.

(9) Method of relaxations and other methods for computing harmonic functions

2-D harmonic functions [Figure: a small 2-D lattice with interior points x, y, z; x also borders two boundary points with value 0, y borders one boundary point with value 1, and z borders two boundary points with value 1]

The original Dirichlet problem Distribution of temperature in a sheet of metal. One end of the sheet has temperature t=0, the other end t=1. Laplace’s differential equation: ∂²u/∂x² + ∂²u/∂y² = 0. This is a special (steady-state) case of the (transient) heat equation: ∂u/∂t = k*(∂²u/∂x² + ∂²u/∂y²). In general, the solutions to this equation are called harmonic functions. [Figure: a sheet with boundary temperatures U=0 and U=1]

Learning harmonic functions The method of relaxations (sketched below):
–Discrete approximation.
–Assign fixed values to the boundary points.
–Assign arbitrary values to all other points.
–Adjust their values to be the average of their neighbors.
–Repeat until convergence.
Monte Carlo method:
–Perform a random walk on the discrete representation.
–Compute f as the probability of a random walk ending in a particular fixed point.
Linear equation method.
Eigenvector methods:
–Look at the stationary distribution of a random walk.
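A minimal sketch of the method of relaxations on the 1-D problem from the earlier slides (boundary values f(0)=0, f(N)=1); the code is illustrative, not from the lecture:

```python
def relax_1d(N, sweeps=5000):
    """Method of relaxations on the path 0..N with f(0)=0 and f(N)=1.

    Interior values are repeatedly replaced by the average of their two
    neighbors; the fixed boundary drives convergence to f(x) = x/N."""
    f = [0.0] * (N + 1)
    f[N] = 1.0
    for _ in range(sweeps):
        for x in range(1, N):
            f[x] = 0.5 * (f[x - 1] + f[x + 1])
    return f

print(relax_1d(10))  # approaches [0.0, 0.1, 0.2, ..., 1.0]
```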

Monte Carlo solution Least accurate of all. Example: roughly 10,000 runs are needed for an accuracy of 0.01, since the error of a Monte Carlo estimate shrinks as 1/sqrt(number of runs).
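A Monte Carlo sketch for the 2-D example on the next slide; the node names and the network encoding are my reading of that example’s equations, not the original figure:

```python
import random

# Interior points x, y, z; "0a", "0b", etc. are absorbing boundary points.
neighbors = {
    "x": ["y", "z", "0a", "0b"],
    "y": ["x", "1a"],
    "z": ["x", "1b", "1c"],
}
boundary = {"0a": 0.0, "0b": 0.0, "1a": 1.0, "1b": 1.0, "1c": 1.0}

def mc_harmonic(start, trials=100_000):
    """Estimate f(start) as the average boundary value where the walk ends."""
    total = 0.0
    for _ in range(trials):
        node = start
        while node not in boundary:
            node = random.choice(neighbors[node])
        total += boundary[node]
    return total / trials

print(mc_harmonic("x"))  # exact solution of the linear system: 7/19 ≈ 0.368
```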

Example:
–x = 1/4*(y+z+0+0)
–y = 1/2*(x+1)
–z = 1/3*(x+1+1)
In matrix form: Ax = u, so x = A⁻¹u.
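The same system solved by the linear equation method; the rearrangement into Ax = u and the numpy code are my sketch:

```python
import numpy as np

# Rearranged:  4x - y - z = 0;   -x + 2y = 1;   -x + 3z = 2
A = np.array([[ 4.0, -1.0, -1.0],
              [-1.0,  2.0,  0.0],
              [-1.0,  0.0,  3.0]])
u = np.array([0.0, 1.0, 2.0])

x, y, z = np.linalg.solve(A, u)  # solves Ax = u without forming A^-1
print(x, y, z)  # 7/19 ≈ 0.368, 13/19 ≈ 0.684, 15/19 ≈ 0.789
```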

Effective resistance Series: R = R1 + R2. Parallel: conductances add (C = C1 + C2), i.e., 1/R = 1/R1 + 1/R2, so R = R1*R2/(R1+R2).
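Two tiny helpers (illustrative only) encoding these rules:

```python
def series(*rs):
    """Resistors in series: resistances add."""
    return sum(rs)

def parallel(*rs):
    """Resistors in parallel: conductances (1/R) add."""
    return 1.0 / sum(1.0 / r for r in rs)

print(series(1.0, 0.5))    # 1.5
print(parallel(1.0, 1.0))  # 0.5
```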

Example Doyle/Snell page 45

Electrical networks and random walks An ergodic (connected) Markov chain with transition matrix P; the stationary distribution w satisfies w = wP. [Figure: a four-node circuit with nodes a, b, c, d and resistors of 1 Ω and 0.5 Ω] From Doyle and Snell 2000

Electrical networks and random walks [Figure: the same circuit with a 1 V source attached between a and b] v_x is the probability that a random walk starting at x will reach a before reaching b. The random walk interpretation allows us to use Monte Carlo methods to solve electrical circuits.
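A sketch of that Monte Carlo idea on a hypothetical weighted network (the slide’s actual circuit is not reproduced here): the walk leaves a node along each edge with probability proportional to the edge’s conductance, and v_x is estimated as the fraction of walks that reach a before b.

```python
import random

# Hypothetical conductances (1/R) for a small network on nodes a, b, c, d.
conductance = {("a", "c"): 1.0, ("c", "b"): 2.0,
               ("a", "d"): 1.0, ("d", "b"): 1.0, ("c", "d"): 2.0}
adj = {}
for (s, t), c in conductance.items():   # build a symmetric adjacency map
    adj.setdefault(s, {})[t] = c
    adj.setdefault(t, {})[s] = c

def voltage(x, trials=50_000):
    """Estimate v_x = P(walk from x hits a before b)."""
    hits = 0
    for _ in range(trials):
        node = x
        while node not in ("a", "b"):
            nbrs, weights = zip(*adj[node].items())
            node = random.choices(nbrs, weights=weights)[0]
        hits += node == "a"
    return hits / trials

print(voltage("c"), voltage("d"))
```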

Energy-based interpretation The energy dissipated through a resistor is i_xy^2 * R_xy. Over the entire circuit, the total dissipation is E = (1/2) * Σ_{x,y} i_xy^2 * R_xy. The flow from x to y is given by Ohm’s law: i_xy = (v_x − v_y)/R_xy. Conservation of energy: the total energy dissipated in the circuit equals the energy supplied by the source.

Thomson’s principle One can show that the energy dissipated by the unit current flow (for v_b = 0 and i_a = 1) is R_eff. This value is the smallest among all possible unit flows from a to b (Thomson’s Principle).

Eigenvectors and eigenvalues An eigenvector is an implicit “direction” for a matrix: Av = λv, where v (the eigenvector) is non-zero, though λ (the eigenvalue) can be any complex number in principle. Computing eigenvalues: solve the characteristic equation det(A − λI) = 0.

Eigenvectors and eigenvalues Example: A = [[-1, 3], [2, 0]]. det(A − λI) = (−1−λ)(−λ) − 3*2 = 0. Then: λ² + λ − 6 = 0; λ_1 = 2; λ_2 = −3. For λ_1 = 2, (A − 2I)v = 0 gives −3x_1 + 3x_2 = 0. Solutions: x_1 = x_2.
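A quick numpy check of this example (using the 2×2 matrix reconstructed above):

```python
import numpy as np

A = np.array([[-1.0, 3.0],
              [ 2.0, 0.0]])
vals, vecs = np.linalg.eig(A)
print(vals)                      # 2 and -3 (numpy's ordering may vary)
print(vecs[:, np.argmax(vals)])  # eigenvector for λ=2, proportional to [1, 1]
```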

Stochastic matrices Stochastic matrices: each row (or column) adds up to 1 and no value is less than 0. The largest eigenvalue of a stochastic matrix E is real: λ_1 = 1. For λ_1, the left (principal) eigenvector is p, and the right eigenvector is 1 (the all-ones vector). In other words, E^T p = p.

Markov chains A homogeneous Markov chain is defined by an initial distribution x_0 and a Markov kernel E. Path = sequence (x_0, x_1, …, x_n), with x_i = x_{i-1}*E. The probability of a path can be computed as a product of probabilities for each step i. Random walk = find x_j given x_0, E, and j.
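An illustrative sketch with a made-up 2-state kernel (not from the slides):

```python
import numpy as np

E = np.array([[0.9, 0.1],    # hypothetical row-stochastic kernel
              [0.4, 0.6]])
x0 = np.array([1.0, 0.0])    # initial distribution: start in state 0

x = x0
for _ in range(3):           # x_j = x_0 E^j, computed step by step
    x = x @ E
print(x)                     # distribution after j = 3 steps

# Probability of the specific path 0 -> 0 -> 1: a product of step probabilities.
print(x0[0] * E[0, 0] * E[0, 1])
```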

Stationary solutions The fundamental Ergodic Theorem for Markov chains [Grimmett and Stirzaker 1989] says that the Markov chain with kernel E has a stationary distribution p under three conditions:
–E is stochastic
–E is irreducible
–E is aperiodic
To make these conditions true:
–All rows of E add up to 1 (and no value is negative)
–Make sure that E is strongly connected
–Make sure that E is not bipartite
Example: PageRank [Brin and Page 1998]: use “teleportation” (see the sketch below).
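A sketch of the teleportation fix (the damping value 0.85 is the commonly cited PageRank default; handling of dangling nodes is omitted):

```python
import numpy as np

def google_matrix(E, d=0.85):
    """Mix a row-stochastic kernel E with uniform "teleportation" so the
    resulting chain is stochastic, irreducible, and aperiodic."""
    n = E.shape[0]
    return d * E + (1 - d) * np.ones((n, n)) / n
```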

Example This graph E has a second graph E’ (not drawn) superimposed on it: E’ is the uniform transition graph. [Figure: the walk’s distribution at t=0 and t=1]

Eigenvectors An eigenvector is an implicit “direction” for a matrix: Ev = λv, where v is non-zero, though λ can be any complex number in principle. The largest eigenvalue of a stochastic matrix E is real: λ_1 = 1. For λ_1, the left (principal) eigenvector is p, and the right eigenvector is 1 (the all-ones vector). In other words, E^T p = p.

Computing the stationary distribution
function PowerStatDist(E):
begin
  p^(0) = u  (or p^(0) = [1, 0, …, 0])
  i = 1
  repeat
    p^(i) = E^T p^(i-1)
    L = ||p^(i) − p^(i-1)||_1
    i = i + 1
  until L < ε
  return p^(i)
end
Solution for the stationary distribution. Each iteration costs O(m), where m is the number of edges (nonzero entries of E).
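A runnable Python version of the pseudocode above (a sketch; names are mine):

```python
import numpy as np

def power_stat_dist(E, eps=1e-10):
    """Power iteration for the stationary distribution p with E^T p = p.

    Assumes E is row-stochastic, irreducible, and aperiodic (see the
    conditions on the earlier slide)."""
    n = E.shape[0]
    p = np.ones(n) / n               # p^(0) = u, the uniform distribution
    while True:
        p_next = E.T @ p
        if np.abs(p_next - p).sum() < eps:   # L1 distance between iterates
            return p_next
        p = p_next

E = np.array([[0.9, 0.1],
              [0.4, 0.6]])
print(power_stat_dist(E))    # [0.8, 0.2] for this kernel
```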

Example [Figure: the walk’s distribution at t=0, t=1, and t=10]

More dimensions Polya’s theorem says that a 1-D random walk is recurrent and that a 2-D walk is also recurrent. However, a 3-D walk has a non-zero escape probability (p ≈ 0.66). http://mathworld.wolfram.com/PolyasRandomWalkConstants.html
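A finite-horizon simulation sketch (my illustration, not from the slides): it counts walks that revisit the origin within a fixed number of steps, so it slightly underestimates the true return probability.

```python
import random

def return_fraction(dim, steps=5_000, trials=1_000):
    """Fraction of dim-dimensional walks returning to the origin
    within `steps` steps (a lower bound on the return probability)."""
    returned = 0
    for _ in range(trials):
        pos = [0] * dim
        for _ in range(steps):
            axis = random.randrange(dim)
            pos[axis] += random.choice((-1, 1))
            if not any(pos):      # back at the origin
                returned += 1
                break
    return returned / trials

# In 3-D the true return probability is about 0.34 (escape ≈ 0.66);
# in 1-D and 2-D the fraction creeps toward 1 as `steps` grows.
print(return_fraction(3))
```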