Brun’s Sieve (presentation transcript)


2 Brun’s Sieve. Let B_1, …, B_m be events, X_i the indicator random variable for B_i, and X = X_1 + … + X_m the number of B_i that hold. Let there be a hidden parameter n (so that actually m = m(n), B_i = B_i(n), X = X(n)) which will define the following o, O notation. Define S^(r) = Σ Pr[B_{i_1} ∧ … ∧ B_{i_r}], the sum over all r-element sets {i_1, …, i_r} ⊆ {1, …, m}.

3 Theorem 8.3.1. Suppose there is a constant μ so that E[X] = S^(1) → μ and such that for every fixed r, E[X(X−1)⋯(X−r+1)]/r! = S^(r) → μ^r/r!. Then Pr[X = 0] → e^{−μ}, and indeed for every t, Pr[X = t] → (μ^t/t!)·e^{−μ}.
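As a sanity check (our own illustration, not from the slides), the theorem can be watched in action on a classical instance: the number X of fixed points of a uniform random permutation of {1, …, n}. With B_i the event σ(i) = i, S^(r) = C(n, r)·(n−r)!/n! = 1/r! exactly for every fixed r, so μ = 1 and the theorem predicts Pr[X = t] → e^{−1}/t!. A minimal Monte Carlo sketch (the helper names are ours):

```python
import math
import random

def fixed_points(n: int) -> int:
    """Number of fixed points of a uniform random permutation of range(n)."""
    perm = list(range(n))
    random.shuffle(perm)
    return sum(1 for i, v in enumerate(perm) if v == i)

def empirical_pmf(n: int, trials: int, t_max: int) -> list:
    """Empirical Pr[X = t] for t = 0, ..., t_max."""
    counts = [0] * (t_max + 1)
    for _ in range(trials):
        x = fixed_points(n)
        if x <= t_max:
            counts[x] += 1
    return [c / trials for c in counts]

random.seed(0)
pmf = empirical_pmf(n=50, trials=20_000, t_max=3)
for t, emp in enumerate(pmf):
    thy = math.exp(-1) / math.factorial(t)  # Poisson(1) prediction
    print(f"Pr[X = {t}]: empirical {emp:.3f}, Poisson(1) predicts {thy:.3f}")
```

The empirical frequencies land close to e^{−1}/t! even for modest n, which is the content of the Poisson limit.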

4 Note first that Pr[X ≥ r] ≤ S^(r) = Σ Pr[B_{i_1} ∧ … ∧ B_{i_r}], where the sum is over all {i_1, …, i_r} ⊆ {1, …, m}. The Inclusion-Exclusion Principle gives that Pr[X = 0] = Pr[B̄_1 ∧ … ∧ B̄_m] = 1 − S^(1) + S^(2) − … + (−1)^r S^(r) + …. Bonferroni’s inequality: let P(E_i) be the probability that E_i is true and P(E_1 ∪ … ∪ E_n) the probability that at least one of E_1, …, E_n is true. Then P(E_1 ∪ … ∪ E_n) ≤ Σ_{i=1}^n P(E_i).
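The alternating over/underestimation is easy to see numerically. The sketch below is our own illustration: it takes independent events, so that Pr[X = 0] = Π(1 − p_i) is available in closed form and S^(r) is the r-th elementary symmetric polynomial of the p_i, and prints each truncated inclusion-exclusion sum against the exact value:

```python
from itertools import combinations
from math import prod

# Independent events make the check exact: Pr[X = 0] = prod(1 - p_i),
# and S^(r) is the r-th elementary symmetric polynomial of the p_i.
p = [0.3, 0.2, 0.5, 0.1, 0.4]

def S(r: int) -> float:
    """S^(r) = sum over r-subsets of Pr[all r events hold]."""
    return sum(prod(q) for q in combinations(p, r))

exact = prod(1 - q for q in p)
partial = 0.0
for k in range(len(p) + 1):
    partial += (-1) ** k * S(k)
    side = "over" if k % 2 == 0 else "under"
    print(f"truncated at k={k}: {partial:+.4f} ({side}estimates exact {exact:.4f})")
```

Truncating after an even term overestimates Pr[X = 0], truncating after an odd term underestimates it, and the full sum recovers it exactly.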

5 Proof. We do only the case t = 0. Fix ε > 0. Choose s so that the partial sums Σ_{r=0}^{2s} (−1)^r μ^r/r! and Σ_{r=0}^{2s+1} (−1)^r μ^r/r! both lie within ε/2 of e^{−μ}. The Bonferroni inequalities state that, in general, the inclusion-exclusion formula alternately over- and underestimates Pr[X = 0]. In particular, Pr[X = 0] ≤ Σ_{r=0}^{2s} (−1)^r S^(r). Select n_0 (via the hidden variable) so that for n ≥ n_0, |S^(r) − μ^r/r!| < ε/(4s + 4) for 0 ≤ r ≤ 2s.

6 Proof (cont.) For such n, Pr[X = 0] ≤ Σ_{r=0}^{2s} (−1)^r S^(r) ≤ Σ_{r=0}^{2s} (−1)^r μ^r/r! + ε/2 ≤ e^{−μ} + ε. Similarly, taking the sum to 2s + 1 (a Bonferroni underestimate), we find n_0′ so that for n ≥ n_0′, Pr[X = 0] ≥ Σ_{r=0}^{2s+1} (−1)^r S^(r) ≥ e^{−μ} − ε. As ε was arbitrary, Pr[X = 0] → e^{−μ}. ∎

7 Let G ~ G(n, p) be the random graph and let EPIT represent the statement that every vertex lies in a triangle. Theorem 8.3.2. Let c > 0 be fixed and let p = p(n), μ = μ(n) satisfy C(n−1, 2)·p³ = μ and n·e^{−μ} = c. Then Pr[G(n, p) ⊨ EPIT] → e^{−c}.

8 Proof. First fix x ∈ V(G). For each unordered pair {y, z} ⊆ V(G) − {x}, let B_xyz be the event that {x, y, z} is a triangle of G. Let C_x be the event ∧_{y,z} B̄_xyz (that x lies in no triangle) and X_x be the corresponding indicator random variable. We use Janson’s Inequality to bound E[X_x] = Pr[C_x]. Here p = o(1), so ε = o(1), and μ = C(n−1, 2)·p³ as defined above.

9 Proof (cont.) Dependency xyz ~ xuv occurs if and only if the sets overlap (other than in x). Hence Δ = Σ_{xyz~xuv} Pr[B_xyz ∧ B_xuv] = O(n³)·p⁵ = o(1), since p = n^{−2/3+o(1)}. Thus Janson’s Inequality gives E[X_x] = Pr[C_x] = (1 + o(1))·e^{−μ} = (1 + o(1))·c/n. Now define X = Σ_{x∈V(G)} X_x, the number of vertices x not lying in a triangle. Then from Linearity of Expectation, E[X] = n·(1 + o(1))·c/n → c.
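The convergence E[X] → c can be checked deterministically through the Janson lower bound M = Π(1 − Pr[B_xyz]) = (1 − p³)^{C(n−1,2)} ≤ Pr[C_x]: under the parametrization of Theorem 8.3.2, n·M → c. A small numerical sketch (the function name and the choice c = 2 are ours):

```python
import math

def n_times_lower_bound(n: int, c: float) -> float:
    """n * M, where M = (1 - p^3)^C(n-1,2) is Janson's lower bound for
    Pr[C_x], under C(n-1,2) * p^3 = mu and n * exp(-mu) = c."""
    b = (n - 1) * (n - 2) / 2          # C(n-1, 2) pairs {y, z}
    mu = math.log(n / c)               # solves n * exp(-mu) = c
    p3 = mu / b                        # p^3 from C(n-1,2) * p^3 = mu
    M = math.exp(b * math.log1p(-p3))  # (1 - p^3)^b, computed stably
    return n * M

for n in (10**3, 10**5, 10**7):
    print(n, n_times_lower_bound(n, c=2.0))
```

The values approach c = 2 as n grows, matching E[X] → c; `log1p` avoids the catastrophic cancellation in 1 − p³ for large n.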

10 Proof (cont.) We need to show that the Poisson Paradigm applies to X. Fix r. Then S^(r) = Σ Pr[C_{x_1} ∧ … ∧ C_{x_r}], the sum over all sets of vertices {x_1, …, x_r}. All r-sets look alike, so S^(r) = C(n, r)·Pr[C_{x_1} ∧ … ∧ C_{x_r}], where x_1, …, x_r are some particular vertices. But C_{x_1} ∧ … ∧ C_{x_r} = ∧ B̄_{x_i yz}, the conjunction over 1 ≤ i ≤ r and all y, z.

11 Proof (cont.) We apply Janson’s Inequality to this conjunction. Again ε = p³ = o(1). The number of events B_{x_i yz} is r·C(n−1, 2) − O(n), the overcount coming from those triangles containing two (or three) of the x_i. (Here it is crucial that r is fixed.) Thus Σ Pr[B_{x_i yz}] = rμ − O(n·p³) = rμ + o(1). As before, Δ is p⁵ times the number of pairs x_i yz ~ x_j y′z′. There are O(rn³) = O(n³) terms with i = j and O(r²n²) = O(n²) terms with i ≠ j, so again Δ = o(1). Therefore Pr[C_{x_1} ∧ … ∧ C_{x_r}] = (1 + o(1))·e^{−rμ} = (1 + o(1))·(c/n)^r, and S^(r) = C(n, r)·(1 + o(1))·(c/n)^r → c^r/r!, so Theorem 8.3.1 applies and Pr[G(n, p) ⊨ EPIT] = Pr[X = 0] → e^{−c}. ∎

12 Large Deviations. Given a point in the probability space (i.e., a selection of R), we call an index set J ⊆ I a disjoint family (abbreviated disfam) if (i) B_j holds for every j ∈ J, and (ii) for no j, j′ ∈ J is j ~ j′. If, in addition, (iii) whenever j′ ∉ J and B_{j′} holds there is some j ∈ J with j ~ j′, then we call J a maximal disjoint family (abbreviated maxdisfam).

13 Lemma 8.4.1. With the above notation and for any integer s, Pr[there exists a disfam J, |J| = s] ≤ μ^s/s!. Proof. Let Σ* denote the sum over all s-sets J ⊆ I with no j ~ j′. Let Σ** denote the sum over ordered s-tuples (j_1, …, j_s) with {j_1, …, j_s} forming such a J. Let Σ*** denote the sum over all ordered s-tuples (j_1, …, j_s).

14 Proof (cont.) Pr[there exists a disfam J, |J| = s] ≤ Σ* Pr[∧_{j∈J} B_j] = Σ* Π_{j∈J} Pr[B_j] (the events B_j, j ∈ J, are mutually independent since no j ~ j′) = (1/s!)·Σ** Pr[B_{j_1}] ⋯ Pr[B_{j_s}] ≤ (1/s!)·Σ*** Pr[B_{j_1}] ⋯ Pr[B_{j_s}] = (1/s!)·(Σ_{i∈I} Pr[B_i])^s = μ^s/s!. ∎
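In the degenerate case where no two indices are related by ~ at all, every set of indices whose events all hold is a disfam, so the lemma reduces to Pr[X ≥ s] ≤ μ^s/s!. That special case is easy to test by simulation (an illustration of ours, not from the slides):

```python
import math
import random

# With no ~ relations, a disfam of size s exists iff at least s of the
# (independent) events hold, so Lemma 8.4.1 reads Pr[X >= s] <= mu^s / s!.
p = [0.5, 0.4, 0.6, 0.3, 0.5, 0.2]
mu = sum(p)

random.seed(1)
samples = [sum(1 for q in p if random.random() < q) for _ in range(50_000)]

for s in range(len(p) + 1):
    emp = sum(1 for x in samples if x >= s) / len(samples)
    bound = mu ** s / math.factorial(s)
    print(f"s={s}: Pr[X >= {s}] ~ {emp:.4f}, bound mu^s/s! = {bound:.4f}")
```

The bound is vacuous (above 1) for small s and only bites for s well above μ, which is exactly how it is used: Lemma 8.4.1 handles the large-s tail.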

15 For smaller s we look at the further condition of J being a maxdisfam. To that end, let μ_s denote the minimum, over all j_1, …, j_s, of Σ Pr[B_i], the sum taken over all i ∈ I except those i with i ~ j_l for some 1 ≤ l ≤ s. In applications s will be small (otherwise we use Lemma 8.4.1) and μ_s will be close to μ. For some applications it is convenient to set ν = max_{j∈I} Σ_{i~j} Pr[B_i] and note that μ_s ≥ μ − sν, since excluding the i with i ~ j_l removes at most ν from the sum for each of the s indices j_l.

16 Lemma 8.4.2. With the above notation and for any integer s, Pr[there exists a maxdisfam J, |J| = s] ≤ (μ^s/s!)·e^{−μ_s + Δ/2}. Proof. As in Lemma 8.4.1, we bound this probability by Σ* Pr[J = {j_1, …, j_s} is a maxdisfam]. For this to occur, J must first be a disfam, and then ∧* B̄_i must hold, where ∧* is the conjunction over all i ∈ I except those with i ~ j_l for some 1 ≤ l ≤ s.

17 Proof (cont.) We apply Janson’s Inequality to give an upper bound to Pr[∧* B̄_i]. The associated values μ*, Δ* satisfy μ* ≥ μ_s and Δ* ≤ Δ, the latter since ∧* has simply fewer addends. Thus Pr[∧* B̄_i] ≤ e^{−μ* + Δ*/2} ≤ e^{−μ_s + Δ/2}. Since every i appearing in ∧* has i ≁ j_l for all l, this event is independent of ∧_{j∈J} B_j, and so Pr[there exists a maxdisfam J, |J| = s] ≤ Σ* Π_{j∈J} Pr[B_j]·e^{−μ_s + Δ/2} ≤ (μ^s/s!)·e^{−μ_s + Δ/2}. ∎


20 When Δ = o(1) and νμ = o(1) or, more generally, μ_{3μ} = μ + o(1), Lemma 8.4.2 gives a close approximation to the Poisson Distribution, since Pr[there exists a maxdisfam J, |J| = s] ≤ (μ^s/s!)·e^{−μ_s + Δ/2} = (1 + o(1))·(μ^s/s!)·e^{−μ} for s ≤ 3μ, and the probability is quite small for larger s by Lemma 8.4.1.
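For concreteness (our own numerical illustration, with μ = 4 chosen arbitrarily): once the μ_s = μ + o(1) and Δ = o(1) corrections are absorbed, the Lemma 8.4.2 bound is just the Poisson(μ) probability (μ^s/s!)·e^{−μ}, and almost all of that mass sits below the cutoff s ≤ 3μ, while the raw Lemma 8.4.1 bound μ^s/s! already controls the tail:

```python
import math

mu = 4.0
cutoff = int(3 * mu)  # Lemma 8.4.2 is used for s <= 3*mu

# With mu_s = mu + o(1) and Delta = o(1), the Lemma 8.4.2 bound is
# (1 + o(1)) times the Poisson(mu) pmf; nearly all mass is below the cutoff.
head = sum(mu ** s / math.factorial(s) * math.exp(-mu) for s in range(cutoff + 1))

# Lemma 8.4.1 handles s > 3*mu: the raw bound mu^s / s! is already small there.
tail_bound = sum(mu ** s / math.factorial(s) for s in range(cutoff + 1, cutoff + 30))

print(f"Poisson({mu}) mass on s <= {cutoff}: {head:.6f}")
print(f"Lemma 8.4.1 bound on s > {cutoff}: {tail_bound:.2e}")
```

So the two lemmas together pin the distribution of the maxdisfam size to (essentially) a Poisson(μ) law.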
