
1 Week 4 – Random Graphs Dr. Anthony Bonato Ryerson University AM8002 Fall 2014

2 Random graphs: Paul Erdős and Alfréd Rényi

5 G(n,p) random graph model (Erdős, Rényi, 63) p = p(n) a real number in (0,1), n a positive integer G(n,p): probability space on graphs with nodes {1,…,n}, each pair of nodes joined independently with probability p

6 Formal definition n a positive integer, p a real number in [0,1] G(n,p) is a probability space on labelled graphs with vertex set V = [n] = {1,2,…,n} such that, for any graph G on [n], $\Pr[G] = p^{|E(G)|}(1-p)^{\binom{n}{2}-|E(G)|}$ NB: p can be a function of n –today, p is a constant
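
To make the model concrete, a minimal Python sketch that samples from G(n,p) directly from this definition (the helper name gnp is ours, not a library function):

import random

def gnp(n, p, seed=None):
    # Sample from G(n,p): each of the C(n,2) pairs on {1,...,n}
    # becomes an edge independently with probability p.
    rng = random.Random(seed)
    return {(u, v)
            for u in range(1, n + 1)
            for v in range(u + 1, n + 1)
            if rng.random() < p}

edges = gnp(10, 0.5, seed=1)
print(len(edges), "edges sampled")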

7 Properties of G(n,p) consider some graph G in G(n,p) the graph G could be any n-vertex graph, so not much can be said about G with certainty some properties of G, however, are likely to hold we are interested in properties that occur with high probability when n is large

8 A.a.s. an event A_n happens asymptotically almost surely (a.a.s.) in G(n,p) if it holds there with probability tending to 1 as n→∞ Theorem 4.1. A.a.s. G in G(n,p) has diameter 2. we just say: a.a.s. G(n,p) has diameter 2.

9 First moment method in G(n,p), all graph parameters |E(G)|, γ(G), ω(G), … become random variables we focus on computing the expectations (averages) of these parameters

10 Discussion Calculate the expected number of edges in G(n,p). the use of expectation when studying random graphs is sometimes referred to as the first moment method
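
One way to answer the discussion question: writing X_e for the indicator that the pair e is an edge, linearity of expectation gives

\mathbb{E}\bigl[|E(G)|\bigr] \;=\; \sum_{e \in \binom{[n]}{2}} \mathbb{E}[X_e] \;=\; \binom{n}{2} p.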

11 Degrees and diameter Theorem 4.2: A.a.s. the degree of each vertex of G in G(n,p) equals (1+o(1))np concentration: the degree of a fixed vertex has the binomial distribution Bin(n-1, p)

12 Markov’s inequality Theorem 4.3 (Markov’s inequality) For any non-negative random variable X and t > 0, we have that Pr[X ≥ t] ≤ E[X]/t
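
The proof is one line: since X is non-negative,

\mathbb{E}[X] \;\ge\; \mathbb{E}\bigl[X \,\mathbf{1}_{\{X \ge t\}}\bigr] \;\ge\; t \Pr[X \ge t],
\qquad\text{hence}\qquad \Pr[X \ge t] \;\le\; \frac{\mathbb{E}[X]}{t}.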

13 Chernoff bound Theorem 4.4 (Chernoff bound) Let X be a binomially distributed random variable on G(n,p) with E[X] = np. Then for 0 < ε ≤ 3/2 we have that Pr[|X − E[X]| ≥ εE[X]] ≤ 2exp(−ε²E[X]/3)
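
A quick empirical sanity check of this bound, simulating Bin(n, p) directly in Python (the parameter values here are arbitrary choices of ours):

import math
import random

# Compare the empirical tail of X ~ Bin(n, p) with the Chernoff bound
# Pr[|X - np| >= eps*np] <= 2*exp(-eps^2 * np / 3).
n, p, eps, trials = 1000, 0.5, 0.1, 5000
mu = n * p
rng = random.Random(0)

hits = sum(
    abs(sum(rng.random() < p for _ in range(n)) - mu) >= eps * mu
    for _ in range(trials)
)

print(f"empirical tail: {hits / trials:.4f}")
print(f"Chernoff bound: {2 * math.exp(-eps**2 * mu / 3):.4f}")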

15 Martingales let X and Y be random variables on the same probability space the conditional mass function of X given Y = y is defined by f_{X|Y}(x|y) = Pr[X = x | Y = y] note that for a fixed y, f_{X|Y}(x|y) is a function of x the conditional expectation of X given Y = y is the expectation of this conditional distribution let g(y) = E[X | Y = y]; then g(Y) is the conditional expectation of X given Y, written E[X|Y]

16 Intuition E[X|Y] is the expected value of X given knowledge of Y note that E[X|Y] is a random variable –its precise value depends on the value of Y
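
A concrete example (ours, for illustration): roll a fair die, let X be the outcome and Y its parity; then

\mathbb{E}[X \mid Y = \text{odd}] = \tfrac{1+3+5}{3} = 3, \qquad \mathbb{E}[X \mid Y = \text{even}] = \tfrac{2+4+6}{3} = 4,

so E[X|Y] is a random variable taking the values 3 and 4 with probability 1/2 each, and E[E[X|Y]] = 7/2 = E[X].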

17 Definition a martingale is a sequence (X_0, X_1, ..., X_t) of random variables over a given probability space such that for all i > 0, E[X_i | X_0, X_1, ..., X_{i-1}] = X_{i-1}

18 Example a gambler starts with $100 she flips a fair coin t times; when the coin is heads, she wins $1; tails, she loses $1 let X_i denote the gambler’s bankroll after i flips then (X_0, X_1, ..., X_t) is a martingale, since: E[X_i | X_0, X_1, ..., X_{i-1}] = 1/2(X_{i-1} + 1) + 1/2(X_{i-1} - 1) = X_{i-1}
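
A small Monte Carlo check of this example: iterating the martingale property gives E[X_t] = X_0 = 100, which the following Python sketch (parameters arbitrary) verifies by simulation:

import random

# Average the gambler's bankroll after t fair coin flips over many
# runs; the martingale property forces E[X_t] = X_0 = 100.
rng = random.Random(0)
t, runs, start = 50, 10000, 100

total = 0
for _ in range(runs):
    bankroll = start
    for _ in range(t):
        bankroll += 1 if rng.random() < 0.5 else -1
    total += bankroll

print(f"average X_t over {runs} runs: {total / runs:.3f} (expect ~{start})")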

19 Doob martingales let A, Z_1, ..., Z_t be random variables define X_0 = E[A] and X_i = E[A | Z_1, ..., Z_i] for 1 ≤ i ≤ t it can be shown that (X_0, X_1, ..., X_t) is a martingale, called the Doob martingale idea: A = f(Z_1, ..., Z_t) for some function f, with X_0 = E[A] and X_t = A the Z_i are revealed one by one until we know everything, and hence A

20 Azuma-Hoeffding inequality Theorem 4.5 Let (X_0, X_1, ..., X_t) be a martingale such that |X_{i+1} − X_i| ≤ c for all i (c-Lipschitz condition). Then for all λ > 0, Pr[|X_t − X_0| ≥ λ] ≤ 2exp(−λ²/(2tc²)) a concentration inequality
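
For the coin-flip martingale of slide 18 we have c = 1, so the theorem gives Pr[|X_t − X_0| ≥ λ] ≤ 2exp(−λ²/(2t)). A Python sketch (with arbitrary parameters of ours) comparing this bound to simulation:

import math
import random

# Empirical tail of the +/-1 coin-flip martingale after t steps,
# against the Azuma-Hoeffding bound with c = 1.
rng = random.Random(0)
t, runs, lam = 100, 20000, 25

hits = 0
for _ in range(runs):
    displacement = sum(1 if rng.random() < 0.5 else -1 for _ in range(t))
    if abs(displacement) >= lam:
        hits += 1

print(f"empirical tail: {hits / runs:.4f}")
print(f"Azuma bound:    {2 * math.exp(-lam**2 / (2 * t)):.4f}")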

21 Example: vertex colouring let A = χ(G(n,p)), and let Z_i contain the information on the presence/absence of the edges ij with j < i the Doob martingale here is called the vertex-exposure martingale –reveal one vertex at a time

22 Concentration of chromatic number Theorem 4.6 For G in G(n,p) and all real λ > 0, Pr[|χ(G) − E[χ(G)]| ≥ λ√(n−1)] ≤ 2exp(−λ²/2) hence, χ(G(n,p)) is concentrated around its expectation; this was proved before anyone knew E[χ(G(n,p))]!
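
An illustration of this concentration in practice, assuming the networkx library is available. Computing χ exactly is NP-hard, so this sketch uses greedy colouring, which only upper-bounds χ(G); the point is just that the colour counts barely vary across samples:

import networkx as nx

# Sample G(n, p) repeatedly and greedily colour each sample; greedy
# colouring upper-bounds chi(G), but its concentration across samples
# illustrates the flavour of Theorem 4.6.
n, p, samples = 200, 0.5, 30
counts = []
for seed in range(samples):
    G = nx.gnp_random_graph(n, p, seed=seed)
    colouring = nx.coloring.greedy_color(G, strategy="largest_first")
    counts.append(max(colouring.values()) + 1)

print(f"greedy colour counts over {samples} samples: "
      f"min={min(counts)}, max={max(counts)}")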

23 Aside: evolution of G(n,p) think of G(n,p) as evolving from a co-clique to a clique as p increases from 0 to 1 at p = 1/n, Erdős and Rényi observed that something interesting happens a.a.s.: –with p = c/n, c < 1, the graph is disconnected with all components trees, the largest of order Θ(log(n)) –with p = c/n, c > 1, a unique giant component of order Θ(n) emerges (the graph itself only becomes connected around p = log(n)/n) Erdős and Rényi called this the double jump physicists call it the phase transition: it is similar to phenomena like freezing or boiling
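
A quick look at the two regimes, again assuming networkx (the script and its parameters are ours; the threshold claim is the slide's):

import networkx as nx

# Largest component on either side of the phase transition at p = 1/n.
n = 10000
for c in (0.5, 2.0):
    G = nx.fast_gnp_random_graph(n, c / n, seed=0)
    largest = max(len(comp) for comp in nx.connected_components(G))
    print(f"c = {c}: largest component has {largest} of {n} vertices")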

