
1
Extracting Randomness From Few Independent Sources
Boaz Barak, IAS; Russell Impagliazzo, UCSD; Avi Wigderson, IAS

2
Plan:
1. Discuss problem and model
2. State our result
3. Introduce main tool – Thm by [BKT,K]
4.* Prove our main theorem

3
Randomness Extraction
Randomness is central to CS (cf. randomized algorithms, cryptography, distributed computing). How do you execute randomized algorithms and protocols?
Solution: sample some random physical data (coin tossing, thermal noise, hard-disk movements, …).
Problem: data from physical sources is not a sequence of ideal coin tosses.

4
Randomness Extractors
Definition: E: {0,1}^n → {0,1}^{0.1k} is an extractor if ∀ r.v. X with entropy ≥ k, E(X) is close to U_{0.1k}.
Idea: high-entropy data X → extractor E → uniform output → randomized algorithm / protocol.

5
Randomness Extractors
Definition: E: {0,1}^n → {0,1}^{0.1k} is an extractor if ∀ r.v. X with entropy ≥ k, E(X) is close to U_{0.1k}.
Problem: No such extractor exists.
Thm: ∀ E: {0,1}^n → {0,1}^{0.1k} there's a r.v. X w/ entropy ≥ n−1 s.t. the first bit of E(X) is constant.
Proof sketch: Assume wlog |{x | E_1(x) = 0}| ≥ 2^n/2; let X be the uniform distribution over this set.
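The proof sketch above is effectively constructive for any fixed candidate extractor. As a rough illustration (the function `E` below is a made-up toy map, not from the talk), one can enumerate {0,1}^n and take the larger preimage of the first output bit:

```python
import itertools

def constant_first_bit_source(E, n):
    """For ANY candidate E: {0,1}^n -> {0,1}^m, return a set S of size
    >= 2^(n-1) (a flat source of min-entropy >= n-1) on which the first
    output bit of E is constant -- mirroring the proof sketch above."""
    zeros, ones = [], []
    for bits in itertools.product([0, 1], repeat=n):
        (zeros if E(bits)[0] == 0 else ones).append(bits)
    # One of the two preimage sets must cover at least half of {0,1}^n.
    return zeros if len(zeros) >= len(ones) else ones

# Toy candidate "extractor": XOR of all bits, then parity of first two bits.
def E(bits):
    return (sum(bits) % 2, (bits[0] + bits[1]) % 2)

S = constant_first_bit_source(E, n=4)
assert len(S) >= 2 ** 3                    # min-entropy >= n-1 = 3
assert len({E(x)[0] for x in S}) == 1      # first output bit constant on S
```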

6
Solution 1: Seeded Extractors
Def: E: {0,1}^n × {0,1}^d → {0,1}^{0.1k} is a (seeded) extractor if ∀ r.v. X w/ min-entropy ≥ k, |E(X, U_d) − U_{0.1k}|_1 < 1/100.
X has min-entropy ≥ k (denoted H_∞(X) ≥ k) if ∀x Pr[X = x] ≤ 2^{−k}. Every such distribution is a convex combination of flat distributions – uniform distributions on sets of size ≥ 2^k. In this talk: entropy = min-entropy.
Many exciting results, applications and connections [Z,NZ,Ta,Tr,RSW,STV,TSZ,SU,…].
Thm [LRVW]: For every n, k there's a seeded extractor with d = O(log n).
Corollary: Any probabilistic algorithm can be simulated w/ a weak random source + polynomial overhead.
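The min-entropy definition in the note above can be sketched in a few lines (the two distributions below are hypothetical toys, chosen only to contrast a flat source with a skewed one):

```python
import math

def min_entropy(dist):
    """H_inf(X) = -log2(max_x Pr[X=x]); dist maps outcomes to probabilities."""
    return -math.log2(max(dist.values()))

# A flat distribution on a set of size 2^k has min-entropy exactly k ...
flat = {x: 1 / 8 for x in range(8)}
assert abs(min_entropy(flat) - 3) < 1e-9

# ... while a source can have many possible outcomes yet tiny min-entropy,
# because one heavy outcome caps it at -log2(1/2) = 1.
skewed = {0: 0.5, **{x: 0.5 / 1024 for x in range(1, 1025)}}
assert min_entropy(skewed) == 1.0
```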

7
Solution 1: Seeded Extractors
Thm [LRVW]: For every n, k there's a seeded extractor with d = O(log n).
Corollary: Any probabilistic algorithm can be simulated w/ a weak random source + polynomial overhead.
Question: What about other uses of randomness? For example, can we use this for cryptography?
Answer: No! For example, concatenating encryptions according to all possible seeds won't be secure! Need to use seedless extractors!

8
Seedless Extractors
Idea: Bypass the impossibility result by making an additional assumption on the high-entropy input. Long history and many results [vN,P,B,SV,CW,TV,KZ,…].
In this work: We assume the input comes from a few independent distributions ([CG]).
Def: E: ({0,1}^n)^c → {0,1}^{0.1k} is a c-sample extractor if ∀ independent r.v.s X_1, …, X_c w/ min-entropy ≥ k, |E(X_1, …, X_c) − U_{0.1k}|_1 < 2^{−Ω(k)}.
Motivation: mathematically clean and plausible model.

9
Def: E: ({0,1}^n)^c → {0,1}^{0.1k} is a c-sample extractor if ∀ independent r.v.s X_1, …, X_c w/ min-entropy ≥ k, |E(X_1, …, X_c) − U_{0.1k}|_1 < 2^{−Ω(k)}.
Optimal (non-explicit) construction: c = 2, every k ≥ Ω(log n).
Previous best explicit construction [SV,V,CG,ER,DEOR]: c = 2, every k ≥ (1+δ)n/2. Obtained by variants of the following 1-bit-output extractor: E(x,y) = …
Problematic, since natural entropy sources often have entropy less than n/2.

10
Def: E: ({0,1}^n)^c → {0,1}^{0.1k} is a c-sample extractor if ∀ independent r.v.s X_1, …, X_c w/ min-entropy ≥ k, |E(X_1, …, X_c) − U_{0.1k}|_1 < 2^{−Ω(k)}.
Our Result: For every δ > 0, c = poly(1/δ) and k = δn.
Main Thm: ∀ δ > 0 ∃ c = poly(1/δ) and poly-time E: ({0,1}^n)^c → {0,1}^n s.t. for all independent r.v.s X_1, …, X_c w/ min-entropy ≥ δn, |E(X_1, …, X_c) − U_n|_1 < 2^{−Ω(n)}.
Optimal (non-explicit) construction: c = 2, every k ≥ Ω(log n).
Previous best explicit construction [SV,V,CG,ER,DEOR]: c = 2, every k ≥ (1+δ)n/2.

11
Main Thm: ∀ δ > 0 ∃ c = poly(1/δ) and poly-time E: ({0,1}^n)^c → {0,1}^n s.t. for all independent r.v.s X_1, …, X_c w/ min-entropy ≥ δn, |E(X_1, …, X_c) − U_n|_1 < 2^{−Ω(n)}.
Plan:
1. Discuss problem and model
2. State our result
3. Introduce main tool – Thm by [BKT,K]; show BKT (almost) immediately implies dispersers.
4. Prove our main theorem

12
Main Thm: ∀ δ > 0 ∃ c = poly(1/δ) and poly-time E: ({0,1}^n)^c → {0,1}^n s.t. for all independent r.v.s X_1, …, X_c w/ min-entropy ≥ δn, |E(X_1, …, X_c) − U_n|_1 < 2^{−Ω(n)}.
Our main tool is the following result:
Thm 1 [BKT,K]: ∃ absolute constant ε > 0 s.t. for every prime field F and set A ⊆ F, max{|A+A|, |A·A|} ≥ min{|A|^{1+ε}, |F|}.
(Here A+A = {a+b | a, b ∈ A} and A·A = {a·b | a, b ∈ A}.)
1. Finite-field analog of a theorem by [ES].
2. Note Thm 1 would be false if F had non-trivial subfields.
3. Note if A is an arithmetic (resp. geometric) sequence, then |A+A| (resp. |A·A|) is small.
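Remark 3 is easy to check numerically. The sketch below (the prime `p = 10007` and the progression parameters are arbitrary choices, not from the talk) computes |A+A| and |A·A| for an arithmetic and a geometric progression in F_p: each progression makes one of the two sets tiny, but, as Thm 1 predicts, never both.

```python
def sums_and_products(A, p):
    """Return (|A+A|, |A*A|) with arithmetic in the prime field F_p."""
    AplusA = {(a + b) % p for a in A for b in A}
    AtimesA = {(a * b) % p for a in A for b in A}
    return len(AplusA), len(AtimesA)

p = 10007  # a prime
arith = [7 + 5 * i for i in range(50)]    # arithmetic progression
geom = [pow(3, i, p) for i in range(50)]  # geometric progression

sa, pa = sums_and_products(arith, p)
sg, pg = sums_and_products(geom, p)

assert sa <= 2 * 50 - 1       # |A+A| is tiny when A is arithmetic
assert pg <= 2 * 50 - 1       # |A*A| is tiny when A is geometric
assert max(sa, pa) >= 50 ** 1.1   # ...but the other set is large
assert max(sg, pg) >= 50 ** 1.1
```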

13
How is this related to extractors?
Thm 1 [BKT,K]: ∃ absolute constant ε > 0 s.t. for every prime field F and set A ⊆ F, max{|A+A|, |A·A|} ≥ |A|^{1+ε}.
Disperser Lemma [BKT]: Let δ > 0 and F a prime field; then ∃ c = poly(1/δ) and poly-time E: F^c → F s.t. if X_1, …, X_c ⊆ F satisfy |X_i| ≥ |F|^δ, then E(X_1, …, X_c) = F.
Corollary: Identify {0,1}^n w/ a prime field F of size ≈ 2^n. Then we get a poly-time E s.t. if r.v.s X_1, …, X_c have entropy ≥ δn, then Supp(E(X_1, …, X_c)) = {0,1}^n. This is called a disperser.

14
How is this related to extractors?
Disperser Lemma [BKT]: Let δ > 0 and F a prime field; then ∃ c = poly(1/δ) and poly-time E: F^c → F s.t. if X_1, …, X_c ⊆ F satisfy |X_i| ≥ |F|^δ, then E(X_1, …, X_c) = F.
Proof: Use a lemma of Ruzsa to get an asymmetric version of Thm 1.
Lemma [R,N]: If A, B ⊆ G w/ |A| = |B| = M and |A∘B| ≤ M^{1+ε}, then |A∘A| ≤ M^{1+O(ε)} (∘ the group operation).
Thm 1 [BKT,K], asymmetric version: ∃ absolute constant ε > 0 s.t. for every prime field F and sets A, B, C ⊆ F (with |A| = |B| = |C|), |A·B+C| ≥ |A|^{1+ε}.
(|A·A| large ⇒ |A·B| large ⇒ |A·B+C| large; |A+A| large ⇒ |A+C| large ⇒ |A·B+C| large.)
We let E be the recursive application of (a, b, c) ↦ a·b+c with depth O(log(1/δ)).

15
[Diagram: a ternary tree of gates, each computing (a, b, c) ↦ a·b+c, applied recursively to the inputs a_1, a_2, …, a_{poly(1/δ)}.]
Thm 1 [BKT,K], asymmetric version: ∃ absolute constant ε > 0 s.t. for every prime field F and sets A, B, C ⊆ F (with |A| = |B| = |C|), |A·B+C| ≥ |A|^{1+ε}.
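The recursive tree above can be sketched as follows. This is a shape-only sketch under stated assumptions: `p` and the sample inputs are toy values, the input count is assumed to be a power of 3, and the code shows only the combining structure, not the disperser analysis.

```python
def combine(xs, p):
    """Recursively fold field elements via (a, b, c) -> a*b + c (mod p).
    With 3^d inputs this is the depth-d tree from the construction:
    each level invokes the asymmetric Thm 1 once, growing the support by
    a (1+eps) power, so depth O(log(1/delta)) suffices."""
    n = len(xs)
    if n == 1:
        return xs[0]
    assert n % 3 == 0, "expects 3^d inputs"
    third = n // 3
    a = combine(xs[:third], p)
    b = combine(xs[third:2 * third], p)
    c = combine(xs[2 * third:], p)
    return (a * b + c) % p

p = 101
assert combine([5], p) == 5
assert combine([2, 3, 4], p) == (2 * 3 + 4) % p
# Depth 2: three depth-1 gates feed one top gate.
assert combine(list(range(1, 10)), p) == ((1*2+3) * (4*5+6) + (7*8+9)) % p
```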

16
Plan:
1. Discuss problem and model
2. State our result
3. Introduce main tool – Thm by [BKT,K]; show BKT (almost) immediately implies dispersers.
4. Prove our main theorem

17
Distributional Version of [BKT]
Thm 1 [BKT,K], asymmetric version: ∃ absolute constant ε > 0 s.t. for every prime field F and sets A, B, C ⊆ F (with |A| = |B| = |C|), |A·B+C| ≥ |A|^{1+ε}.
Our Main Lemma: ∃ absolute constant ε > 0 s.t. for every prime field F and distributions A, B, C over F (with H_∞(A) = H_∞(B) = H_∞(C)), the distribution A·B+C is 2^{−Ω(H_∞(A))}-close to having entropy ≥ (1+ε)H_∞(A).
(The distribution A·B+C assigns to x the probability that a·b+c = x with a ∈_R A, b ∈_R B, c ∈_R C.)
Main Lemma ⇒ Main Theorem.

18
Our Main Lemma: ∃ absolute constant ε > 0 s.t. for every prime field F and distributions A, B, C over F (with H_∞(A) = H_∞(B) = H_∞(C)), the distribution A·B+C is 2^{−Ω(H_∞(A))}-close to having entropy ≥ (1+ε)H_∞(A).
Main Lemma ⇒ Main Theorem.
[Diagram: the same recursive (a, b, c) ↦ a·b+c tree, applied to a_1, a_2, …, a_{poly(1/δ)}.]

19
Our Main Lemma: ∃ absolute constant ε > 0 s.t. for every prime field F and distributions A, B, C over F (with H_∞(A) = H_∞(B) = H_∞(C)), the distribution A·B+C is 2^{−Ω(H_∞(A))}-close to having entropy ≥ (1+ε)H_∞(A).
Plan: Prove the Main Lemma by reducing to [BKT]. We use "magic" lemmas of Gowers & Ruzsa in the reduction.

20
Our Main Lemma: ∃ absolute constant ε > 0 s.t. for every prime field F and distributions A, B, C over F (with H_∞(A) = H_∞(B) = H_∞(C)), the distribution A·B+C is 2^{−Ω(H_∞(A))}-close to having entropy ≥ (1+ε)H_∞(A).
Detailed Plan:
1. Introduce collision probability – a different entropy measure.
2. Rephrase the Main Lemma in terms of c.p.
3. Show a naïve approach to proving it, and a counterexample.
4. Use Gowers's and Ruzsa's lemmas to show the counterexample essentially captures all cases.

21
Collision Probability
cp(X) = Pr_{x,x′←X}[x = x′] = Σ_x p_x²
Fact 1: If H_∞(X) ≥ k then cp(X) ≤ 2^{−k}.
Fact 2: If cp(X) ≤ 2^{−k(1+ε)} then X is 2^{−εk/2}-close to having min-entropy at least k(1+ε/2).
Fact 3: If X is a convex combination of X_1, …, X_m then cp(X) ≤ max{cp(X_1), …, cp(X_m)}.
Notation: If D is a r.v., then the 2-entropy of D is H_2(D) = log(1/cp(D)).
Fact 1 + Fact 2 ⇒ H_2(D) ≈ H_∞(D).
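Facts 1 and 3 are mechanical to verify on small examples. The distributions below are made-up toys: a flat source on 16 points, and an even mixture of two flat sources.

```python
import math
from collections import Counter

def cp(dist):
    """Collision probability: sum_x p_x^2."""
    return sum(px * px for px in dist.values())

def h2(dist):
    """2-entropy: H_2(D) = log2(1 / cp(D))."""
    return -math.log2(cp(dist))

# Fact 1 is tight on a flat source: H_inf = k  ==>  cp = 2^-k.
flat = {x: 1 / 16 for x in range(16)}
assert abs(cp(flat) - 2 ** -4) < 1e-12
assert abs(h2(flat) - 4) < 1e-9

# Fact 3: cp of a convex combination is at most the max component cp.
# mix = 1/2 * (flat on 16 points) + 1/2 * (flat on 8 points)
mix = Counter({x: 1 / 32 for x in range(16)})
for x in range(16, 24):
    mix[x] += 1 / 16
assert cp(mix) <= max(2 ** -4, 2 ** -3) + 1e-12
```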

22
Main Lemma: ∃ ε > 0 s.t. for every prime field F and distributions A, B, C over F (with H_∞(A) = H_∞(B) = H_∞(C)), the distribution A·B+C is 2^{−Ω(H_∞(A))}-close to entropy ≥ (1+ε)H_∞(A).
Main Lemma (CP version): ∃ ε > 0 s.t. for every prime field F and sets A, B, C ⊆ F (with |A| = |B| = |C|), the distribution A·B+C is |A|^{−Ω(1)}-close to having 2-entropy ≥ (1+ε)log|A|.
Thus, it is sufficient to prove the CP version.

23
Main Lemma (CP version): ∃ ε > 0 s.t. for every prime field F and sets A, B, C ⊆ F (with |A| = |B| = |C|), the distribution A·B+C is |A|^{−Ω(1)}-close to having 2-entropy ≥ (1+ε)log|A|.
Detailed Plan:
1. Introduce collision probability – a different entropy measure.
2. Rephrase the Main Lemma in terms of c.p.
3. Show a naïve approach to proving it, and a counterexample.
4. Use Gowers's and Ruzsa's lemmas to show the counterexample essentially captures all cases.

24
Naïve Approach
Prove a direct analog of BKT.
Conjecture: ∃ ε > 0 s.t. for every prime F and set A ⊆ F, max{H_2(A+A), H_2(A·A)} ≥ (1+ε)log|A|.
Counterexample: A = A_G ∪ A_A, where A_G is a geometric sequence and A_A a (disjoint) arithmetic sequence. Then cp(A+A), cp(A·A) ≥ 1/(10|A|), hence H_2(A+A), H_2(A·A) ≤ log|A| + O(1).
However, in this case H_2(A·A+A) ≥ (1+ε)log|A|.

25
Naïve Approach
Counterexample: A = A_G ∪ A_A (A_G a geometric sequence, A_A a disjoint arithmetic sequence).
Claim: H_2(A·A+A) ≥ (1+ε)log|A|.
Sketch: A·A+A is a convex combination of A_A·A+A and A_G·A+A.
– cp(A_A·A+A) ≤ cp(A_A·A), which is low since A_A is an arithmetic sequence.
– A_G·A+A is a convex combination of the distributions A_G·a+A, and cp(A_G·a+A) is low since A_G·a is a geometric sequence.
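Both the counterexample and the claim can be checked numerically. In the sketch below (the prime and the progression sizes are arbitrary small choices, not from the talk), A is the union of a geometric and an arithmetic progression in F_p; H_2(A+A) and H_2(A·A) stay within a constant of log|A|, while H_2(A·A+A) clears (1+ε)log|A| comfortably.

```python
import math
from collections import Counter
from itertools import product

def h2_of(outcomes):
    """H_2 of the distribution induced by uniform independent choices."""
    counts = Counter(outcomes)
    total = sum(counts.values())
    collision_prob = sum((c / total) ** 2 for c in counts.values())
    return -math.log2(collision_prob)

p = 100003                                   # a prime
A_G = [pow(3, i, p) for i in range(20)]      # geometric part
A_A = [30000 + 7 * i for i in range(20)]     # arithmetic part
assert not set(A_G) & set(A_A)               # the two parts are disjoint
A = A_G + A_A
M = len(A)

h_sum = h2_of([(a + b) % p for a, b in product(A, A)])
h_prod = h2_of([(a * b) % p for a, b in product(A, A)])
h_mix = h2_of([(a * b + c) % p for a, b, c in product(A, A, A)])

# A+A collides on the arithmetic part, A*A on the geometric part...
assert h_sum < math.log2(M) + 4
assert h_prod < math.log2(M) + 4
# ...but a*b+c beats log|A| by a constant factor, as the claim predicts.
assert h_mix > 1.2 * math.log2(M)
```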

26
Detailed Plan:
1. Introduce collision probability – a different entropy measure.
2. Rephrase the Main Lemma in terms of c.p.
3. Show a naïve approach to proving it, and a counterexample.
4. Use Gowers's and Ruzsa's lemmas to show the counterexample essentially captures all cases.
Main Lemma (CP version): ∃ absolute constant ε > 0 s.t. for every prime field F and sets A, B, C ⊆ F (with |A| = |B| = |C|), the distribution A·B+C is |A|^{−Ω(1)}-close to having c.p. ≤ |A|^{−(1+ε)}.

27
Proof of Main Lemma
Main Lemma (CP version): ∃ absolute constant ε > 0 s.t. for every prime field F and sets A, B, C ⊆ F (with |A| = |B| = |C|), the distribution A·B+C is |A|^{−Ω(1)}-close to having 2-entropy ≥ (1+ε)log|A|.
Let M = |A| = |B| = |C| and fix some ε > 0 (e.g., BKT's ε divided by 100).
(Loose) Notations:
– A number ≥ M^{1+ε} is called large.
– A number ≥ M^{1−Ω(ε)} is called not-too-small.
– A distribution D has high 2-entropy if H_2(D) ≥ (1+ε)log M.
Our Goal: Prove that A·B+C is close to having high 2-entropy (i.e., it is close to having c.p. ≤ 1/M^{1+ε}).

28
Tools:
Thm 1 [BKT,K]: If A ⊆ F is not-too-small then either |A·A| or |A+A| is large.
Lemma [R,N]: If |A∘A| is large then |A∘B| is large, for any not-too-small B (∘ is + or ·).
Magic Lemma [G,BS]: Either H_2(A∘B) is large, or ∃ not-too-small subsets A′ ⊆ A, B′ ⊆ B s.t. |A′∘B′| is not large.

29
A First Distributional Analog
Cor [BKT+R]: If ∃ not-too-small B s.t. |A·B| is not large, then |A+C| is large ∀ not-too-small C.
Proof: |A·B| is not large ⇒ |A·A| is not large [R] ⇒ |A+A| is large [BKT] ⇒ |A+C| is large [R].
Natural analog: If ∃ not-too-small B s.t. H_2(A·B) is not large, then H_2(A+C) is large ∀ not-too-small C. This is false: e.g., A = B = C = A_G ∪ A_A.
However, the following is true:
PF Lemma: If ∃ not-too-small B s.t. |A·B| is not large, then H_2(A+C) is large ∀ not-too-small C.

30
Def: A not-too-small set A ⊆ F is plus-friendly if H_2(A+C) is large ∀ not-too-small sets C.
PF Lemma: If ∃ not-too-small B s.t. |A·B| is not large, then H_2(A+C) is large ∀ not-too-small C (i.e., A is plus-friendly).
Proof: If H_2(A+C) is not large, then by Gowers's Lemma ∃ not-too-small A′ ⊆ A, C′ ⊆ C s.t. |A′+C′| is not large. By Ruzsa's lemma |A′+A′| is not large ⇒ by BKT |A′·A′| is large. Since A′ ⊆ A, |A·A| is also large ⇒ by Ruzsa's lemma |A·B| is large – contradiction!
Properties: 1. A plus-friendly, b ∈ F ⇒ A·b plus-friendly. 2. A, A′ plus-friendly and disjoint ⇒ A ∪ A′ plus-friendly.

31
Our Goal: Prove A·B+C is close to having low c.p.
Assume H_2(A·B+C) is not large. We'll show A = A_+ ∪ A_× s.t. A_+, A_× are disjoint and:
1) A_+ is plus-friendly (or A_+ is empty);
2) H_2(A_×·B) is large (or |A_×| ≤ M^{1−ε}).
1 + 2 ⇒ contradiction, since A·B+C is M^{−Ω(1)}-close to a convex combination of A_+·B+C and A_×·B+C, but:
a) H_2(A_+·B+C) is large, since it is a convex combination of the distributions A_+·b+C and each A_+·b is plus-friendly;
b) H_2(A_×·B+C) is large, since it is a convex combination of the distributions A_×·B+c, which are permutations of A_×·B.

32
Our Goal: Prove A·B+C is close to having low c.p.
Assume H_2(A·B+C) is not large. We'll show A = A_+ ∪ A_× s.t. A_+, A_× are disjoint and: 1) A_+ is plus-friendly (or A_+ is empty); 2) H_2(A_×·B) is large (or |A_×| ≤ M^{1−ε}).
We build the partition iteratively. Initially A_+ = ∅, A_× = A. Assume A_× is not-too-small (o/w we're done), and assume H_2(A_×·B) is not large (o/w we're done). By Gowers's lemma, ∃ not-too-small subsets A′ ⊆ A_×, B′ ⊆ B s.t. |A′·B′| is not large. By the PF Lemma A′ is plus-friendly; remove A′ from A_× and add it to A_+.

33
Main Thm: ∀ δ > 0 ∃ c = poly(1/δ) and poly-time E: ({0,1}^n)^c → {0,1}^n s.t. for all independent r.v.s X_1, …, X_c w/ min-entropy ≥ δn, |E(X_1, …, X_c) − U_n|_1 < 2^{−Ω(n)}.
Main Lemma: ∃ absolute constant ε > 0 s.t. for every prime field F and distributions A, B, C over F (with H_∞(A) = H_∞(B) = H_∞(C) < 0.8 log|F|), the distribution A·B+C is 2^{−Ω(H_∞(A))}-close to having entropy ≥ (1+ε)H_∞(A).
This finishes the proof of the Main Lemma and hence the Main Theorem.

34
Another Result: A disperser for the case that all samples come from the same distribution, which only requires Ω(log n) entropy (using [EH]).

35
Open Problems
– Extractors/dispersers with a lower entropy requirement (k = n^{Ω(1)} or even k = Ω(log n)).
– Improvement for the case of two samples (related to constructing Ramsey graphs).
– More applications of the results/techniques.
