Extracting Randomness From Few Independent Sources
Boaz Barak (IAS), Russell Impagliazzo (UCSD), Avi Wigderson (IAS)

Plan:
1. Discuss problem and model
2. State our result
3. Introduce main tool – Thm by [BKT,K]
4. Prove our main theorem.

Randomness Extraction
Randomness is central to CS (e.g., randomized algorithms, cryptography, distributed computing).
How do you execute randomized algorithms and protocols?
Solution: sample some random physical data (coin tossing, thermal noise, hard-disk movement, …).
Problem: data from physical sources is not a sequence of ideal coin tosses.

Randomness Extractors
Definition: E : {0,1}^n → {0,1}^{0.1k} is an extractor if ∀ r.v. X with entropy ≥ k, E(X) is close to U_{0.1k}.
Idea: [Diagram: high-entropy data X → extractor E → uniform output → randomized algorithm / protocol]

Randomness Extractors
Definition: E : {0,1}^n → {0,1}^{0.1k} is an extractor if ∀ r.v. X with entropy ≥ k, E(X) is close to U_{0.1k}.
Problem: No such extractor exists.
Thm: ∀ E : {0,1}^n → {0,1}^{0.1k} there's a r.v. X w/ entropy ≥ n−1 s.t. the first bit of E(X) is constant.
Proof sketch: Assume wlog that |{ x | E_1(x)=0 }| ≥ 2^{n−1}, and let X be the uniform distribution over this set.
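A minimal sketch (my own, not from the talk) that makes this proof concrete: for any candidate one-output-bit map, the uniform distribution over the majority preimage is a source of min-entropy ≥ n−1 on which that bit is constant. The function E1 below is a hypothetical stand-in for the first output bit of an arbitrary extractor.

```python
# Illustration (assumptions mine): the impossibility argument, run on a toy E_1.
import random

n = 12

def E1(x: int) -> int:
    # Hypothetical stand-in for the first output bit of a candidate extractor.
    return bin(x).count("1") % 2

# Split {0,1}^n by the value of E_1 and take the larger side.
zeros = [x for x in range(2 ** n) if E1(x) == 0]
ones = [x for x in range(2 ** n) if E1(x) == 1]
bad = zeros if len(zeros) >= len(ones) else ones

# The flat distribution on `bad` has min-entropy >= n-1 ...
assert len(bad) >= 2 ** (n - 1)
# ... yet E_1 is constant on every sample drawn from it.
print(len(bad), {E1(random.choice(bad)) for _ in range(5)})
```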

Solution 1: Seeded Extractors
Def: E : {0,1}^n × {0,1}^d → {0,1}^{0.1k} is a (seeded) extractor if ∀ r.v. X w/ min-entropy ≥ k, |E(X, U_d) − U_{0.1k}|_1 < 1/100.
X has min-entropy ≥ k (denoted H(X) ≥ k) if ∀x Pr[X=x] ≤ 2^{−k}. Every such distribution is a convex combination of flat distributions – uniform distributions on sets of size ≥ 2^k. In this talk: entropy = min-entropy.
Many exciting results, applications and connections [Z,NZ,Ta,Tr,RSW,STV,TSZ,SU,…].
Thm [LRVW]: For every n,k there's a seeded extractor with d = O(log n).
Corollary: Any probabilistic algorithm can be simulated with a weak random source and polynomial overhead.
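For concreteness, a tiny helper (mine, not from the talk) computing the entropy notion used throughout this talk: H(X) = −log₂ max_x Pr[X=x].

```python
# Min-entropy of a finite distribution, the entropy notion used in this talk.
import math

def min_entropy(probs):
    return -math.log2(max(probs))

flat = [1 / 8] * 8               # flat on a set of size 8: H = 3
skewed = [1 / 2] + [1 / 14] * 7  # one heavy atom dominates: H = 1
print(min_entropy(flat), min_entropy(skewed))  # 3.0 1.0
```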

Solution 1: Seeded Extractors
Thm [LRVW]: For every n,k there's a seeded extractor with d = O(log n).
Corollary: Any probabilistic algorithm can be simulated with a weak random source and polynomial overhead.
Question: What about other uses of randomness? For example, can we use this for cryptography?
Answer: No! For example, if we concatenate encryptions according to all possible seeds, the result won't be secure! We need seedless extractors!

Seedless Extractors
Idea: Bypass the impossibility result by making an additional assumption on the high-entropy input.
In this work: We assume the input comes from a few independent distributions ([CG]). Long history and many results [vN,P,B,SV,CW,TV,KZ,…].
Def: E : ({0,1}^n)^c → {0,1}^{0.1k} is a c-sample extractor if ∀ ind. r.v. X_1,…,X_c w/ min-entropy ≥ k, |E(X_1,…,X_c) − U_{0.1k}|_1 < 2^{−Ω(k)}.
Motivation: a mathematically clean and plausible model.

Def: E : ({0,1}^n)^c → {0,1}^{0.1k} is a c-sample extractor if ∀ ind. r.v. X_1,…,X_c w/ min-entropy ≥ k, |E(X_1,…,X_c) − U_{0.1k}|_1 < 2^{−Ω(k)}.
Optimal (non-explicit) construction: c=2, every k ≥ Ω(log n).
Previous best explicit construction [SV,V,CG,ER,DEOR]: c=2, every k ≥ (1+δ)n/2. Obtained by variants of the following 1-bit-output extractor: E(x,y) = ⟨x,y⟩ mod 2.
Problematic, since natural entropy sources often have entropy less than n/2.
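A sketch (mine, not from the talk) of that classic inner-product two-source extractor, with an empirical bias check on two random flat sources; the supports and parameters are illustrative assumptions.

```python
# E(x,y) = <x,y> mod 2, checked empirically on two independent flat sources.
import itertools
import random

n = 10
k = n // 2 + 1  # just above the n/2 barrier discussed on this slide

def ip2(x: int, y: int) -> int:
    return bin(x & y).count("1") % 2

# Two independent flat sources: uniform on random supports of size 2^k.
X = random.sample(range(2 ** n), 2 ** k)
Y = random.sample(range(2 ** n), 2 ** k)

ones = sum(ip2(x, y) for x, y in itertools.product(X, Y))
bias = abs(ones / (len(X) * len(Y)) - 0.5)
print(f"empirical bias of E(X,Y): {bias:.4f}")  # typically small once k > n/2
```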

Def: E : ({0,1}^n)^c → {0,1}^{0.1k} is a c-sample extractor if ∀ ind. r.v. X_1,…,X_c w/ min-entropy ≥ k, |E(X_1,…,X_c) − U_{0.1k}|_1 < 2^{−Ω(k)}.
Our Result: For every δ>0, c = poly(1/δ) and k = δn.
Main Thm: ∀δ>0 ∃ c = poly(1/δ) and poly-time E : ({0,1}^n)^c → {0,1}^n s.t. ∀ ind. r.v. X_1,…,X_c w/ min-entropy ≥ δn, |E(X_1,…,X_c) − U_n|_1 < 2^{−Ω(n)}.
Optimal (non-explicit) construction: c=2, every k ≥ Ω(log n).
Previous best explicit construction [SV,V,CG,ER,DEOR]: c=2, every k ≥ (1+δ)n/2.

Main Thm: ∀δ>0 ∃ c = poly(1/δ) and poly-time E : ({0,1}^n)^c → {0,1}^n s.t. ∀ ind. r.v. X_1,…,X_c w/ min-entropy ≥ δn, |E(X_1,…,X_c) − U_n|_1 < 2^{−Ω(n)}.
Plan:
1. Discuss problem and model
2. State our result
3. Introduce main tool – Thm by [BKT,K]. Show BKT (almost) immediately implies dispersers.
4. Prove our main theorem.

Main Thm: ∀δ>0 ∃ c = poly(1/δ) and poly-time E : ({0,1}^n)^c → {0,1}^n s.t. ∀ ind. r.v. X_1,…,X_c w/ min-entropy ≥ δn, |E(X_1,…,X_c) − U_n|_1 < 2^{−Ω(n)}.
Our main tool is the following result:
Thm 1 [BKT,K]: ∃ absolute constant ε>0 s.t. for every prime field F and set A ⊆ F: max{ |A+A|, |A·A| } ≥ min{ |A|^{1+ε}, |F| }.
(Here A+A = { a+b | a,b ∈ A } and A·A = { a·b | a,b ∈ A }.)
1. Finite-field analog of a theorem by [ES].
2. Note Thm 1 would be false if F had non-trivial subfields.
3. Note if A is an arithmetic (resp. geometric) sequence, then |A+A| (resp. |A·A|) is small.
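A quick numerical illustration (mine, not from the talk) of remark 3 and of the sum-product phenomenon in F_p: an arithmetic progression has a small sumset but a large product set, a geometric progression the opposite, and a typical random set has both large. The prime and sequences are arbitrary choices.

```python
# |A+A| and |A·A| in F_p for arithmetic, geometric, and random A.
import random

p = 10007  # an arbitrary prime; F = Z/pZ

def sumset(A):
    return len({(a + b) % p for a in A for b in A})

def productset(A):
    return len({(a * b) % p for a in A for b in A})

m = 50
arith = [(3 + 7 * i) % p for i in range(m)]  # |A+A| small, |A·A| large
geom = [pow(5, i, p) for i in range(m)]      # |A·A| small, |A+A| large
rand = random.sample(range(1, p), m)         # typically both large

for name, A in [("arithmetic", arith), ("geometric", geom), ("random", rand)]:
    print(f"{name:10s} |A+A| = {sumset(A):5d}   |A·A| = {productset(A):5d}")
```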

How is this related to extractors?
Thm 1 [BKT,K]: ∃ absolute constant ε>0 s.t. for prime field F and set A ⊆ F: max{ |A+A|, |A·A| } ≥ min{ |A|^{1+ε}, |F| }.
Disperser Lemma [BKT]: Let δ>0 and F a prime field. Then ∃ c = poly(1/δ) and poly-time E : F^c → F s.t. if X_1,…,X_c ⊆ F satisfy |X_i| ≥ |F|^δ, then E(X_1,…,X_c) = F.
Corollary: Identify {0,1}^n with a prime field F of size ≈ 2^n. Then we get a poly-time E s.t. if r.v.'s X_1,…,X_c have entropy ≥ δn, then Supp{E(X_1,…,X_c)} = {0,1}^n. This is called a disperser.

How is this related to extractors?
Thm 1 [BKT,K]: ∃ absolute constant ε>0 s.t. for prime field F and set A ⊆ F: max{ |A+A|, |A·A| } ≥ min{ |A|^{1+ε}, |F| }.
Thm 1′ [BKT,K]: ∃ absolute constant ε>0 s.t. for prime field F and sets A,B,C ⊆ F (with |A|=|B|=|C|): |A·B+C| ≥ |A|^{1+ε}.
Disperser Lemma [BKT]: Let δ>0 and F a prime field. Then ∃ c = poly(1/δ) and poly-time E : F^c → F s.t. if X_1,…,X_c ⊆ F satisfy |X_i| ≥ |F|^δ, then E(X_1,…,X_c) = F.
Proof: Use a lemma of Ruzsa to get the asymmetric version Thm 1′ of Thm 1:
Lemma [R,N]: If A,B ⊆ G w/ |A|=|B|=M and |A∘B| ≤ M^{1+ε}, then |A∘A| ≤ M^{1+O(ε)}.
(|A·A| large ⇒ |A·B| large ⇒ |A·B+C| large; |A+A| large ⇒ |A+C| large ⇒ |A·B+C| large.)
We let E be the recursive application of (a,b,c) ↦ a·b+c with depth O(log(1/δ)).

[Figure: recursion tree – the inputs a_1, a_2, …, a_{poly(1/δ)} sit at the leaves and are combined by repeated (a,b,c) ↦ a·b+c gates.]
Thm 1′ [BKT,K]: ∃ absolute constant ε>0 s.t. for prime field F and sets A,B,C ⊆ F (with |A|=|B|=|C|): |A·B+C| ≥ |A|^{1+ε}.
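A runnable sketch (mine) of the recursion pictured above: group the inputs into triples and repeatedly apply (a,b,c) ↦ a·b+c over F_p until one field element remains. The prime and input count are illustrative assumptions.

```python
# Recursive (a, b, c) -> a*b + c construction over F_p, as in the tree above.
import random

p = (1 << 61) - 1  # a Mersenne prime, standing in for a prime of ~n bits

def extract(samples):
    """samples: field elements, one per independent source; length 3^t."""
    while len(samples) > 1:
        samples = [
            (samples[i] * samples[i + 1] + samples[i + 2]) % p
            for i in range(0, len(samples), 3)
        ]
    return samples[0]

# Example: 9 = 3^2 samples give a depth-2 recursion tree.
print(extract([random.randrange(p) for _ in range(9)]))
```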

Plan:
1. Discuss problem and model
2. State our result
3. Introduce main tool – Thm by [BKT,K]. Show BKT (almost) immediately implies dispersers.
4. Prove our main theorem.

Distributional Version of [BKT]
Thm 1′ [BKT,K]: ∃ absolute constant ε>0 s.t. for prime field F and sets A,B,C ⊆ F (with |A|=|B|=|C|): |A·B+C| ≥ |A|^{1+ε}.
Our Main Lemma: ∃ absolute constant ε>0 s.t. for prime field F and distributions A,B,C over F (with H(A)=H(B)=H(C)), the distribution A·B+C is 2^{−εH(A)}-close to having entropy ≥ (1+ε)H(A).
(The distribution A·B+C assigns to x the probability that a·b+c = x with a ∈_R A, b ∈_R B, c ∈_R C.)
Main Lemma ⇒ Main Theorem.
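To make this object concrete, here is a small computation (mine, not from the talk) of the distribution A·B+C from three flat sources over a toy prime field, comparing its min-entropy to H(A). All parameters are arbitrary.

```python
# The distribution A·B+C of a*b+c for independent a ∈_R A, b ∈_R B, c ∈_R C.
from collections import Counter
import math
import random

p = 101  # toy prime field

def dist_ABC(A, B, C):
    counts = Counter((a * b + c) % p for a in A for b in B for c in C)
    total = len(A) * len(B) * len(C)
    return {x: v / total for x, v in counts.items()}

A = B = C = random.sample(range(1, p), 10)  # flat sources, H = log2(10)
D = dist_ABC(A, B, C)
print(math.log2(10), -math.log2(max(D.values())))  # H(A) vs H(A·B+C)
```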

Our Main Lemma: ∃ absolute constant ε>0 s.t. for prime field F and distributions A,B,C over F (with H(A)=H(B)=H(C)), the distribution A·B+C is 2^{−εH(A)}-close to having entropy ≥ (1+ε)H(A).
Main Lemma ⇒ Main Theorem.
[Figure: the recursion tree on inputs a_1, a_2, …, a_{poly(1/δ)} again – the Main Lemma is applied at each (a,b,c) ↦ a·b+c gate.]

Our Main Lemma: ∃ absolute constant ε>0 s.t. for prime field F and distributions A,B,C over F (with H(A)=H(B)=H(C)), the distribution A·B+C is 2^{−εH(A)}-close to having entropy ≥ (1+ε)H(A).
Plan: Prove the Main Lemma by reducing to [BKT]. We use magic lemmas of Gowers & Ruzsa in the reduction.

Our Main Lemma: ∃ absolute constant ε>0 s.t. for prime field F and distributions A,B,C over F (with H(A)=H(B)=H(C)), the distribution A·B+C is 2^{−εH(A)}-close to having entropy ≥ (1+ε)H(A).
Detailed Plan:
1. Introduce collision probability – a different entropy measure.
2. Rephrase the Main Lemma in terms of c.p.
3. Show a naïve approach to proving it, and a counterexample.
4. Use Gowers's & Ruzsa's lemmas to show the counterexample essentially captures all cases.

Collision Probability
cp(X) = Pr_{x,x′←X}[ x = x′ ] = Σ_x p_x².
Fact 1: If H(X) ≥ k then cp(X) ≤ 2^{−k}.
Fact 2: If cp(X) ≤ 2^{−k(1+ε)} then X is 2^{−εk/2}-close to having min-entropy at least k(1+ε/2).
Fact 3: If X is a convex combination of X_1,…,X_m then cp(X) ≤ max{ cp(X_1), …, cp(X_m) }.
Notation: If D is a r.v., then the 2-entropy of D is H_2(D) = log(1/cp(D)).
Fact 1 + Fact 2 ⇒ H_2(D) ≈ H(D).
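A quick numeric check (mine, not from the talk) of these definitions and of Facts 1 and 3 on toy distributions.

```python
# Collision probability cp(X) = sum_x p_x^2 and 2-entropy H2 = log(1/cp).
import math

def cp(probs):
    return sum(q * q for q in probs)

def H2(probs):
    return math.log2(1 / cp(probs))

flat16 = [1 / 16] * 16        # flat, H = 4
print(cp(flat16) <= 2 ** -4)  # Fact 1 (equality for flat X): True
print(H2(flat16))             # 4.0

# Fact 3: a 50/50 convex combination has cp at most the max of its parts.
flat8 = [1 / 8] * 8 + [0] * 8
mix = [(a + b) / 2 for a, b in zip(flat16, flat8)]
print(cp(mix) <= max(cp(flat16), cp(flat8)))  # True
```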

Main Lemma: ∃ ε>0 s.t. for prime field F and distributions A,B,C over F (with H(A)=H(B)=H(C)), the distribution A·B+C is 2^{−εH(A)}-close to entropy ≥ (1+ε)H(A).
Main Lemma (CP version): ∃ ε>0 s.t. for prime field F and sets A,B,C ⊆ F (with |A|=|B|=|C|), the distribution A·B+C is |A|^{−ε}-close to having 2-entropy ≥ (1+ε)log|A|.
Since every distribution of min-entropy k is a convex combination of flat distributions, it is sufficient to prove the CP version.

Main Lemma (CP version): ∃ ε>0 s.t. for prime field F and sets A,B,C ⊆ F (with |A|=|B|=|C|), the distribution A·B+C is |A|^{−ε}-close to having 2-entropy ≥ (1+ε)log|A|.
Detailed Plan:
1. Introduce collision probability – a different entropy measure.
2. Rephrase the Main Lemma in terms of c.p.
3. Show a naïve approach to proving it, and a counterexample.
4. Use Gowers's and Ruzsa's lemmas to show the counterexample essentially captures all cases.

Naïve Approach
Prove a direct analog of BKT.
Conjecture: ∃ ε>0 s.t. for prime F and set A ⊆ F: max{ H_2(A+A), H_2(A·A) } ≥ (1+ε)log|A|.
Counterexample: A = A_G ∪ A_A, where A_G is a geometric sequence and A_A a (disjoint) arithmetic sequence:
cp(A+A), cp(A·A) ≥ 1/(10|A|), hence H_2(A+A), H_2(A·A) ≤ log|A| + O(1).
However, in this case H_2(A·A+A) ≥ (1+ε)log|A|.
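A numerical look (mine, not from the talk) at this counterexample over a toy prime field: both cp(A+A) and cp(A·A) stay within a small constant factor of 1/|A|, while cp(A·A+A) drops noticeably. The specific sequences are arbitrary choices.

```python
# cp of A+A, A·A and A·A+A for A = A_G ∪ A_A (geometric ∪ arithmetic) in F_p.
from collections import Counter
import itertools

p = 10007
m = 40
A_G = [pow(3, i, p) for i in range(m)]         # geometric part
A_A = [(5000 + 11 * i) % p for i in range(m)]  # arithmetic part
A = sorted(set(A_G) | set(A_A))                # union (dedup, just in case)

def cp_of(values):
    c = Counter(values)
    total = sum(c.values())
    return sum((v / total) ** 2 for v in c.values())

plus = [(a + b) % p for a, b in itertools.product(A, A)]
times = [(a * b) % p for a, b in itertools.product(A, A)]
mixed = [(a * b + c) % p for a, b, c in itertools.product(A, A, A)]

print(f"1/|A|     = {1 / len(A):.5f}")
print(f"cp(A+A)   = {cp_of(plus):.5f}")   # ~1/|A|: the arithmetic part collides
print(f"cp(A·A)   = {cp_of(times):.5f}")  # ~1/|A|: the geometric part collides
print(f"cp(A·A+A) = {cp_of(mixed):.5f}")  # noticeably smaller
```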

Naïve Approach
Counterexample: A = A_G ∪ A_A; A_G a geometric sequence, A_A a (disjoint) arithmetic sequence.
Claim: H_2(A·A+A) ≥ (1+ε)log|A|.
Sketch: A·A+A is a convex combination of A_A·A+A and A_G·A+A.
– A_A·A+A is a convex combination of the distributions A_A·b+A, and cp(A_A·b+A) is low since A_A·b is an arithmetic sequence.
– A_G·A+A is a convex combination of the distributions A_G·a+A, and cp(A_G·a+A) is low since A_G·a is a geometric sequence.

Main Lemma (CP version): ∃ absolute constant ε>0 s.t. for prime field F and sets A,B,C ⊆ F (with |A|=|B|=|C|), the distribution A·B+C is |A|^{−ε}-close to having c.p. ≤ |A|^{−(1+ε)}.
Detailed Plan:
1. Introduce collision probability – a different entropy measure.
2. Rephrase the Main Lemma in terms of c.p.
3. Show a naïve approach to proving it, and a counterexample.
4. Use Gowers's and Ruzsa's lemmas to show the counterexample essentially captures all cases.

Proof of Main Lemma
Let M = |A| = |B| = |C| and fix some ε>0 (e.g., BKT's constant divided by 100).
(Loose) Notations:
– A number ≥ M^{1+ε} is called large.
– A number ≥ M^{1−ε′} (for a suitable ε′ = ε′(ε)) is called not-too-small.
– A distribution D has high 2-entropy if H_2(D) ≥ (1+ε)log M.
Main Lemma (CP version): ∃ absolute constant ε>0 s.t. for prime field F and sets A,B,C ⊆ F (with |A|=|B|=|C|), the distribution A·B+C is |A|^{−ε}-close to having 2-entropy ≥ (1+ε)log|A|.
Our Goal: Prove that A·B+C is close to having high 2-entropy (i.e., it is close to having c.p. ≤ 1/M^{1+ε}).

Tools (∘ stands for either + or ·):
Thm 1 [BKT,K]: If A ⊆ F is not-too-small, then either |A·A| or |A+A| is large.
Lemma [R,N]: If |A∘A| is large then |A∘B| is large.
Magic Lemma [G,BS]: Either H_2(A∘B) is large, or ∃ not-too-small subsets A′ ⊆ A, B′ ⊆ B s.t. |A′∘B′| is not large.

A First Distributional Analog
Cor [BKT+R]: If ∃ not-too-small B s.t. |A·B| is not large, then |A+C| is large ∀ not-too-small C.
Proof: |A·B| is not large ⇒ |A·A| is not large [R] ⇒ |A+A| is large [BKT] ⇒ |A+C| is large [R].
Natural Analog: If ∃ not-too-small B s.t. H_2(A·B) is not large, then H_2(A+C) is large ∀ not-too-small C.
This is false: e.g., A = B = C = A_G ∪ A_A.
However, the following is true:
PF Lemma: If ∃ not-too-small B s.t. |A·B| is not large, then H_2(A+C) is large ∀ not-too-small C.

Def: A not-too-small set A ⊆ F is plus-friendly if H_2(A+C) is large ∀ not-too-small set C.
1. A plus-friendly, b ∈ F ⇒ A·b plus-friendly.
2. A, A′ plus-friendly and disjoint ⇒ A ∪ A′ plus-friendly.
PF Lemma: If ∃ not-too-small B s.t. |A·B| is not large, then H_2(A+C) is large ∀ not-too-small C.
Proof: If H_2(A+C) is not large, then by Gowers's Lemma ∃ not-too-small A′ ⊆ A, C′ ⊆ C s.t. |A′+C′| is not large. By Ruzsa's lemma |A′+A′| is not large ⇒ by BKT |A′·A′| is large. Since A′ ⊆ A, |A·A| is also large ⇒ by Ruzsa's lemma |A·B| is large – contradiction!

Our Goal: Prove A·B+C is close to having low c.p.
Assume H_2(A·B+C) is not large. We'll show A = A_+ ∪ A_× with A_+, A_× disjoint and:
1) A_+ is plus-friendly (or A_+ is empty);
2) H_2(A_×·B) is large (or |A_×| ≤ M^{1−ε}).
⇒ contradiction, since A·B+C is then M^{−ε}-close to a convex combination of A_+·B+C and A_×·B+C, but:
a) H_2(A_+·B+C) is large, since it is a convex combination of the distributions A_+·b+C and each A_+·b is plus-friendly;
b) H_2(A_×·B+C) is large, since it is a convex combination of the distributions A_×·B+c, which are permutations of A_×·B.

Our Goal: Prove A·B+C is close to having low c.p.
Assume H_2(A·B+C) is not large. We'll show A = A_+ ∪ A_× with A_+, A_× disjoint and:
1) A_+ is plus-friendly (or A_+ is empty);
2) H_2(A_×·B) is large (or |A_×| ≤ M^{1−ε}).
We build the partition iteratively. Initially A_+ = ∅, A_× = A. Assume A_× is not-too-small (o/w we're done), and assume H_2(A_×·B) is not large (o/w we're done). By Gowers's lemma, ∃ not-too-small subsets A′ ⊆ A_×, B′ ⊆ B s.t. |A′·B′| is not large. By the PF Lemma, A′ is plus-friendly; remove A′ from A_× and add it to A_+, and repeat (see the sketch below).
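Purely as a reading aid (mine, not from the talk), the control flow of that iteration written as code. The two helper predicates are abstract stand-ins for the non-constructive Gowers and PF lemmas, so this skeleton is not executable as an algorithm; it only records the structure of the argument.

```python
# Control-flow skeleton of the iterative partition (helpers are oracles).
def partition(A, B, is_not_too_small, h2_product_is_large, gowers_subsets):
    A_plus, A_times = set(), set(A)
    while is_not_too_small(A_times) and not h2_product_is_large(A_times, B):
        # Gowers's lemma: not-too-small A' ⊆ A_times, B' ⊆ B, |A'·B'| not large.
        A_sub, _B_sub = gowers_subsets(A_times, B)
        # PF Lemma: such an A' is plus-friendly; move it to the plus side.
        A_plus |= A_sub
        A_times -= A_sub
    # On exit: A_plus is plus-friendly (a disjoint union of plus-friendly sets),
    # and either A_times is too small or H_2(A_times · B) is large.
    return A_plus, A_times
```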

This finishes the proof of the Main Lemma and hence the Main Theorem.
Main Lemma: ∃ absolute constant ε>0 s.t. for prime field F and distributions A,B,C over F (with H(A)=H(B)=H(C) < 0.8 log|F|), the distribution A·B+C is 2^{−εH(A)}-close to having entropy ≥ (1+ε)H(A).
Main Thm: ∀δ>0 ∃ c = poly(1/δ) and poly-time E : ({0,1}^n)^c → {0,1}^n s.t. ∀ ind. r.v. X_1,…,X_c w/ min-entropy ≥ δn, |E(X_1,…,X_c) − U_n|_1 < 2^{−Ω(n)}.

Another Result: A disperser for the case that all samples come from the same distribution, which only requires Ω(log n) entropy (using [EH]).

Open Problems
1. Extractors/dispersers with a lower entropy requirement (k = n^{Ω(1)}, or even k = Ω(log n)).
2. Improvement for the case of two samples (related to constructing Ramsey graphs).
3. More applications of the results/techniques.