1 Extractors via Low-degree Polynomials. 1. Joint with A. Ta-Shma & D. Zuckerman. 2. Improved: R. Shaltiel and C. Umans. Slides: Adi Akavia.


2 Definitions Def: The min-entropy of a random variable X over {0,1}^n is defined as: H∞(X) = min_x log₂(1/Pr[X=x]). Thus a random variable X has min-entropy at least k if Pr[X=x] ≤ 2^(−k) for all x. [The maximum possible min-entropy for such a R.V. is n.] Def (statistical distance): Two distributions on a domain D are ε-close if the probabilities they give to any A ⊆ D differ by at most ε (namely, half the L1 norm of their difference).
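Both definitions above can be checked numerically; a minimal Python sketch (the 2-bit source X is an invented toy example, not from the slides):

```python
import math

def min_entropy(dist):
    """H_inf(X) = -log2(max_x Pr[X = x])."""
    return -math.log2(max(dist.values()))

def statistical_distance(p, q):
    """Half the L1 distance; equals the max over events A of |p(A) - q(A)|."""
    keys = set(p) | set(q)
    return sum(abs(p.get(s, 0.0) - q.get(s, 0.0)) for s in keys) / 2

# Toy 2-bit source that never outputs a leading 1: min-entropy 1 (out of 2).
X = {"00": 0.5, "01": 0.5}
U = {s: 0.25 for s in ("00", "01", "10", "11")}
print(min_entropy(X))              # 1.0
print(statistical_distance(X, U))  # 0.5, so X is only 0.5-close to uniform
```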

3 Definitions Def: A (k,ε)-extractor is a function E: {0,1}^n × {0,1}^t → {0,1}^m s.t. for any R.V. X with min-entropy ≥ k, E(X,U_t) is ε-close to U_m (where U_m denotes the uniform distribution over {0,1}^m). [Diagram: E maps a weak random source of n bits, together with a seed of t bits, to a random string of m bits.]

4 Parameters The relevant parameters are: min-entropy of the weak random source, k; relevant values are log(n) ≤ k ≤ n (the seed length is t ≥ log(n), hence there is no point in considering lower min-entropy). Seed length t ≥ log(n). Quality of the output, ε. Size of the output, m = f(k); the optimum is m = k. [Diagram: E maps an n-bit weak random source and a t-bit seed to an m-bit random string.]

5 Extractors [Diagram: E maps a high min-entropy distribution over the 2^n source strings, together with a uniformly distributed seed from the 2^t seed strings, to an output over the 2^m output strings that is close to uniform.]

6 Next Bit Predictors Claim: to prove E is an extractor, it suffices to prove that for all 1 ≤ i ≤ m and all predictors f: {0,1}^(i−1) → {0,1}: Pr[f(E(X,U_t)_{1…i−1}) = E(X,U_t)_i] ≤ 1/2 + ε/m. Proof: Assume E is not an extractor; then there exists a distribution X s.t. E(X,U_t) is not ε-close to U_m, that is: there exists a set A ⊆ {0,1}^m with |Pr[E(X,U_t) ∈ A] − Pr[U_m ∈ A]| > ε.

7 Proof Now define the following hybrid distributions: D_i = E(X,U_t)_{1…i} ∘ U_{m−i} for i = 0,…,m, so that D_0 = U_m and D_m = E(X,U_t).

8 Proof Summing the probabilities of the event corresponding to the set A over all distributions yields: Pr_{D_m}[A] − Pr_{D_0}[A] = Σ_{i=1}^{m} (Pr_{D_i}[A] − Pr_{D_{i−1}}[A]) > ε. And because |Σ a_i| ≤ Σ |a_i|, there exists an index 1 ≤ i ≤ m for which: |Pr_{D_i}[A] − Pr_{D_{i−1}}[A]| > ε/m.
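The telescoping step of the last two slides can be sketched numerically; a toy Python illustration (the biased 3-bit distribution standing in for E(X,U_t) and the event A are invented for the example):

```python
from itertools import product

m = 3
strings = ["".join(b) for b in product("01", repeat=m)]

# Toy stand-in for E(X, U_t): first bit biased to 0, remaining bits uniform.
E_out = {s: (0.8 if s[0] == "0" else 0.2) * 0.25 for s in strings}
U = {s: 0.125 for s in strings}

def hybrid(i):
    """D_i: first i bits from E_out's marginal, remaining m-i bits uniform."""
    return {s: sum(E_out[t] for t in strings if t[:i] == s[:i]) * 0.5 ** (m - i)
            for s in strings}

A = {s for s in strings if s[0] == "0"}      # a distinguishing event
def prob(dist): return sum(dist[s] for s in A)

gap = prob(E_out) - prob(U)                  # D_m vs D_0 on the event A
telescoped = sum(prob(hybrid(i)) - prob(hybrid(i - 1)) for i in range(1, m + 1))
assert abs(gap - telescoped) < 1e-9          # the telescoping identity
print(gap)   # ~0.3, so by |sum| <= sum of ||, some index i contributes >= gap/m
```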

9 The Predictor We now define a function f: {0,1}^(i−1) → {0,1} that can predict the i'th bit with probability at least ½ + ε/m ("a next bit predictor"): on input y_1,…,y_{i−1}, the function f uniformly and independently draws the bits y_i,…,y_m and outputs: y_i if y_1⋯y_m ∈ A, and 1 − y_i otherwise. Note: the above definition is not constructive, as A is not known!

10 Proof And f is indeed a next bit predictor: Pr[f(E(X,U_t)_{1…i−1}) = E(X,U_t)_i] ≥ ½ + ε/m. Q.E.D.

11 Next-q-it List-Predictor f is allowed to output a small list of l possible next elements

12 q-ary Extractor Def: Let F be a field with q elements. A (k,l) q-ary extractor is a function E: {0,1}^n × {0,1}^t → F^m s.t. for all R.V. X with min-entropy ≥ k, all 1 ≤ i ≤ m, and all list-predictors f: F^(i−1) → F^l: Pr[E(X,U_t)_i ∈ f(E(X,U_t)_{1…i−1})] ≤ 1/(2√l).

13 Generator Def: Define the generator matrix for the vector space F^d as a matrix A ∈ F^(d×d) s.t. for any non-zero vector v ∈ F^d: {A^i v | i = 1,…,q^d − 1} = F^d \ {0} (that is, any vector 0 ≠ v ∈ F^d multiplied by all powers of A generates the entire vector space F^d except for 0). Lemma: Such a generator matrix exists and can be found in time q^O(d).

14 Strings as Low-degree Polynomials Let F be a field with q elements. Let F^d be a vector space over F. Let h be the smallest integer s.t. a d-variate polynomial of total degree h−1 has enough coefficients to specify an n-bit string. For x ∈ {0,1}^n, let x̂ denote the unique d-variate polynomial of total degree h−1 whose coefficients are specified by x. Note that for such a polynomial, the number of coefficients is exactly C(h+d−2, d−1) ("choosing where to put d−1 bars between h−1 balls").
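The "balls and bars" count can be verified by brute force; a small Python check (the parameters d, h are arbitrary toy values):

```python
from itertools import product
from math import comb

def monomial_count(d, h):
    """Monomials x1^a1 ... xd^ad with a1 + ... + ad = h-1, counted directly."""
    return sum(1 for degs in product(range(h), repeat=d) if sum(degs) == h - 1)

# Stars and bars: h-1 "balls" split by d-1 "bars" -> C(h+d-2, d-1).
for d, h in [(2, 4), (3, 3), (4, 5)]:
    assert monomial_count(d, h) == comb(h + d - 2, d - 1)
print(comb(4 + 3 - 2, 3 - 1))   # d=3, h=4: 10 coefficients
```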

15 The [SU] Extractor The definition of the q-ary extractor: E: {0,1}^n × {0,1}^(d log q) → F^m. The seed is interpreted as a vector v ∈ F^d, and E(x; v) = x̂(Av) ∘ x̂(A²v) ∘ … ∘ x̂(A^m v), where A is the generator matrix. [Diagram: the orbit v, A^i v, …, A^m v in F^d, each point evaluated by x̂.]
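A toy end-to-end sketch of this evaluate-along-the-orbit map, with q = 5, d = 2, m = 3 (illustrative parameters far too small for the theorem to apply; the generator matrix is found by brute force rather than the slides' q^O(d) field construction, and the coefficient vector is an invented stand-in for x):

```python
from itertools import product

q, d, m = 5, 2, 3

def apply_mat(A, v):
    return tuple(sum(A[i][j] * v[j] for j in range(d)) % q for i in range(d))

nonzero = [v for v in product(range(q), repeat=d) if any(v)]

def is_generator(A):
    # one full orbit of size q^d - 1 certifies A as a generator matrix
    w, orbit = nonzero[0], set()
    for _ in range(q ** d - 1):
        w = apply_mat(A, w)
        orbit.add(w)
    return orbit == set(nonzero)

A = next(M for M in ([[a, b], [c, e]] for a, b, c, e in product(range(q), repeat=4))
         if is_generator(M))

# xhat: a bivariate polynomial whose coefficients encode the source string x.
coeffs = {(0, 0): 3, (1, 0): 1, (0, 1): 4, (1, 1): 2}
def xhat(v):
    return sum(c * v[0] ** i * v[1] ** j for (i, j), c in coeffs.items()) % q

def E(v):
    """Output m field elements: xhat evaluated at Av, A^2 v, ..., A^m v."""
    out, w = [], v
    for _ in range(m):
        w = apply_mat(A, w)
        out.append(xhat(w))
    return out

print(E((1, 0)))   # three symbols of F extracted from one seed
```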

16 Main Theorem Thm: For any n, q, d and h as previously defined, E is a (k,l) q-ary extractor under either of two alternative conditions relating the parameters q, h, d, k and l.

17 What's Ahead The "counting argument" and how it works. The reconstruction paradigm. Basic example – lines in space. Proof of the main theorem.

18 Extension Fields A field F2 is called an extension of another field F if F is contained in F2 as a subfield. Thm: For every prime power p^k (p prime, k > 0) there is a unique (up to isomorphism) finite field containing p^k elements. These fields are denoted GF(p^k) and comprise all finite fields. Def: A polynomial is called irreducible over GF(p) if it does not factor over GF(p). Thm: Let f(x) be an irreducible polynomial of degree k over GF(p). The set of polynomials of degree at most k−1 over Z_p, with coefficient-wise addition and multiplication modulo f(x), forms the finite field GF(p^k).

19 Extension Fields - Example Construct GF(2^5) as follows: pick an irreducible polynomial of degree 5 over GF(2). Represent every degree-k polynomial as a vector of its k+1 coefficients. Addition over this field is coefficient-wise addition of the vectors (XOR).

20 Extension Fields - Example Multiplication is ordinary polynomial multiplication, and the product is then reduced modulo the irreducible polynomial.
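A concrete sketch of this arithmetic in Python, representing elements of GF(2^5) as 5-bit integers; the transcript does not show which irreducible polynomial the slides used, so x^5 + x^2 + 1 is assumed here:

```python
IRRED = 0b100101   # assumed irreducible polynomial x^5 + x^2 + 1 over GF(2)

def gf_add(a, b):
    """Coefficient-wise addition of polynomials over GF(2) = XOR."""
    return a ^ b

def gf_mul(a, b):
    """Carry-less polynomial product, then reduction modulo IRRED."""
    prod = 0
    while b:
        if b & 1:
            prod ^= a
        a <<= 1
        b >>= 1
    for shift in range(max(prod.bit_length() - 5, 0), -1, -1):
        if prod & (1 << (shift + 5)):       # a term of degree >= 5 remains
            prod ^= IRRED << shift          # subtract (x^5 + x^2 + 1) * x^shift
    return prod

assert gf_add(0b10011, 0b01110) == 0b11101  # (x^4+x+1)+(x^3+x^2+x) = x^4+x^3+x^2+1
assert gf_mul(0b00100, 0b01000) == 0b00101  # x^2 * x^3 = x^5 = x^2 + 1 here
```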

21 Generator Matrix – Existence Proof Denote by GF*(q^d) the multiplicative group of the Galois Field GF(q^d). This multiplicative group is cyclic, and thus has a generator g: {g^i | i = 1,…,q^d − 1} = GF*(q^d). Let φ be the natural isomorphism between the Galois Field GF(q^d) and the vector space F^d, which matches a polynomial with its vector of coefficients: φ(c_0 + c_1·x + … + c_{d−1}·x^{d−1}) = (c_0, c_1, …, c_{d−1}).

22 Generator Matrix – Existence Proof Now define the generator matrix A of F^d as the linear transformation that corresponds to multiplication by the generator g in GF*(q^d): A·v = φ(g · φ^(−1)(v)). A is a linear transformation because of the distributive property of both the vector space and the field GF(q^d), according to the isomorphism properties.

23 Generator Matrix – Existence Proof It remains to show that the generator matrix A of F^d can be found in time q^O(d). And indeed: The Galois Field GF(q^d) can be constructed in time q^O(d) using an irreducible polynomial of degree d over the field Z_q (and such a polynomial can also be found in time q^O(d) by exhaustive search). The generator g of GF*(q^d) can be found in time q^O(d) by exhaustive search. Using the generator, for any basis of F^d, one can construct d independent equations so as to find the linear transformation A. This linear equation system is also solvable in time q^O(d).
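The construction above can be carried out by hand for the smallest case q = 2, d = 2; a Python sketch (GF(4) built from the irreducible polynomial x^2 + x + 1 with generator g = x; all toy choices for illustration):

```python
def mul(a, b):
    # In GF(4) = GF(2)[x]/(x^2 + x + 1), with elements (c0, c1) ~ c0 + c1*x:
    # (a0 + a1 x)(b0 + b1 x) = a0b0 + (a0b1 + a1b0) x + a1b1 x^2, and x^2 = x + 1.
    c0 = (a[0] * b[0] + a[1] * b[1]) % 2
    c1 = (a[0] * b[1] + a[1] * b[0] + a[1] * b[1]) % 2
    return (c0, c1)

g = (0, 1)                                    # the generator g = x
cols = [mul(g, e) for e in [(1, 0), (0, 1)]]  # g times each basis vector
A = [tuple(col[i] for col in cols) for i in range(2)]   # rows of A

def apply_mat(v):
    return tuple(sum(A[i][j] * v[j] for j in range(2)) % 2 for i in range(2))

# Verify the generator property: powers of A sweep F^2 \ {0} from any v.
for v in [(0, 1), (1, 0), (1, 1)]:
    orbit, w = set(), v
    for _ in range(3):                        # q^d - 1 = 3 powers
        w = apply_mat(w)
        orbit.add(w)
    assert orbit == {(0, 1), (1, 0), (1, 1)}
print(A)   # [(0, 1), (1, 1)] -- the matrix of "multiply by x"
```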

24 Counting Argument For Y ⊆ X, denote μ(Y) = Σ_{y∈Y} Pr[y] ("the weight of Y"). Assume a mapping R: {0,1}^a → {0,1}^n s.t. Pr_{x~X}[∃z R(z)=x] ≥ ½, and let S = {0,1}^a be the set of advice strings. Then: for X uniform over a subset of {0,1}^n, |X| ≤ 2|R(S)|; for an arbitrary distribution X, μ(X) ≤ 2μ(R(S)). If X is of min-entropy k, then μ(R(S)) ≤ 2^a · 2^(−k) = 2^(a−k), and therefore k ≤ a + 1 (since 1 = μ(X) ≤ 2μ(R(S)) ≤ 2^(1+a−k)). [Diagram: R maps the 2^a advice strings S into the 2^n strings, covering the subset R(S).]

25 "Reconstruction Proof Paradigm" Proof sketch: For a certain R.V. X with min-entropy k, assume, by way of contradiction, a predictor f for the q-ary extractor. For a << k, construct a function R: {0,1}^a → {0,1}^n -- the "reconstruction function" -- that uses f as an oracle and satisfies: Pr_{x~X}[∃z R(z)=x] ≥ ½. By the "counting argument", this implies X has min-entropy much smaller than k.

26 Basic Example – Lines Construction: Let BC: F → {0,1}^s be an (inefficient) binary code. Given: x, a weak random source, interpreted as a polynomial x̂: F² → F; and a seed, interpreted as a random point (a,b) and an index j into the binary code. Def: E(x; ((a,b), j)) = BC(x̂(a,b))_j ∘ BC(x̂(a,b+1))_j ∘ … ∘ BC(x̂(a,b+m))_j.

27 Basic Example – Illustration of Construction [Diagram: for the seed s = ((a,b), 2), the points (a,b), (a,b+1), …, (a,b+m) are evaluated by x̂; each value x̂(a,b+i) is encoded by the (inefficient) binary code, and the 2nd bit of each codeword is output, e.g. E(x,s) = 01001.]

28 Basic Example – Proof Sketch Assume, by way of contradiction, there exists a predictor function f. Next, show a reconstruction function R s.t. Pr_{x~X}[∃z R(z)=x] ≥ ½. Conclude: a contradiction! (to the min-entropy assumption of X)

29 Basic Example – Reconstruction Function Pick a random line; list-decode along it using the predictor f; resolve the lists into one value per point on the line; repeat using the new points, until x̂ is evaluated on all of F^d. Parameters: h ~ n^(1/2), j ~ lg n, m ~ desired entropy. The "advice" is the "few" red starting points: a = m·j·O(h).

30 Problems with the above Construction Too many lines! Takes too many bits to define a subspace.

31 Proof Sketch Let X be a random variable with min-entropy at least k. Assume, by way of contradiction: there exists a next bit predictor function f. Next, show a reconstruction function R. Conclude: a contradiction! (to the min-entropy assumption of X)

32 Main Lemma Lemma: Let n, q, d, h be as in the main theorem. There exists a probabilistic function R: {0,1}^a → {0,1}^n with a = O(mhd log q) such that for every x on which f is a good next-element predictor, the following holds (the probability is over the random coins of R): Pr[∃z R(z) = x] ≥ ½.

33 The Reconstruction Function (R) Task: allow many strings x in the support of X to be reconstructed from very short advice strings. Outline: Use f in a sequence of prediction steps to evaluate x̂ on all points of F^d. Interpolate to recover the coefficients of x̂, which gives x. Next we show: there exists a sequence of prediction steps that works for many x in the support of X and requires few advice strings.

34 Curves Let r = Θ(d). Pick random vectors and values: 2r random points y_1,…,y_{2r} ∈ F^d, and 2r values t_1,…,t_{2r} ∈ F. Define degree-(2r−1) polynomials p_1, p_2: p_1: F → F^d defined by p_1(t_i) = y_i, ∀i = 1,…,2r; p_2: F → F^d defined by p_2(t_i) = Ay_i, ∀i = 1,…,r, and p_2(t_i) = y_i, ∀i = r+1,…,2r. Define vector sets P_1 = {p_1(z)}_{z∈F} and P_2 = {p_2(z)}_{z∈F}. ∀i > 0 define P_{2i+1} = AP_{2i−1} and P_{2i+2} = AP_{2i}. ({P_i}, the sequence of prediction steps, are low-degree curves in F^d, chosen using the coin tosses of R.)
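The curve-picking step above amounts to Lagrange interpolation through the chosen (t_i, y_i) pairs, one coordinate at a time; a toy Python sketch over GF(13) with d = 2, r = 2 (illustrative parameters only, far smaller than the slides'):

```python
import random

P, d, r = 13, 2, 2    # toy prime field GF(13)

def lagrange_eval(pts, z):
    """Evaluate the unique low-degree polynomial through pts at z, mod P."""
    total = 0
    for i, (ti, vi) in enumerate(pts):
        num = den = 1
        for j, (tj, _) in enumerate(pts):
            if i != j:
                num = num * (z - tj) % P
                den = den * (ti - tj) % P
        total = (total + vi * num * pow(den, P - 2, P)) % P   # den^-1 mod P
    return total

rng = random.Random(0)
ts = rng.sample(range(P), 2 * r)                 # 2r distinct values t_i in F
ys = [tuple(rng.randrange(P) for _ in range(d))  # 2r random points y_i in F^d
      for _ in range(2 * r)]

def p1(z):
    """The degree-(2r-1) curve p1: F -> F^d with p1(t_i) = y_i."""
    return tuple(lagrange_eval([(t, y[c]) for t, y in zip(ts, ys)], z)
                 for c in range(d))

for t, y in zip(ts, ys):      # the curve passes through the prescribed points
    assert p1(t) == y
print([p1(z) for z in range(3)])   # a few points of the curve P1
```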

35 Curves [Diagram: the values t_1,…,t_{2r} in F map to the points y_1,…,y_{2r} in F^d; applying A repeatedly produces the successive curves, whose points A^i(y_1),…,A^i(y_{2r}) advance along the orbit v, Av, …, A^m v, with consecutive curves sharing r of their points.]

36 Simple Observations A is a non-singular linear transform, hence so is A^i. P_i is a 2r-wise independent collection of points. P_i and P_{i+1} intersect at r random points. x̂|_{P_i} is a univariate polynomial of degree at most 2hr. Given the evaluation of x̂ on Av, A²v, …, A^m v, we may use the predictor function f to predict x̂(A^(m+1) v) to within l values. We need as advice string: the 2hr coefficients of x̂|_{P_i} for i = 1,…,m (length: at most mhr log q ≤ a).

37 Using N.B.P. [Diagram: predicting along the orbit with a single curve: at each point of the next curve the next-bit predictor returns a list of l candidate values.] Cannot resolve into one value!

38 Using N.B.P. [Diagram: the same prediction step with two interleaved curves; the r shared points let the candidate lists on one curve be checked against the other.] Can resolve into one value using the second curve!

39 Using N.B.P. [Diagram: as on the previous slide, with the r fresh points y_{r+1},…,y_{2r} highlighted.] Can resolve into one value using the second curve!

40 Open Problems Is the [SU] extractor optimal? Just run it for longer sequences. The reconstruction technique requires interpolation from h (the degree) points, hence the maximal entropy extracted is k/h. The seed -- a point -- requires a logarithmic number of bits.

41 Main Lemma Proof Cont. Claim: with probability at least 1 − 1/(8q^d) over the coin tosses of R, at least a 1/(4√l) fraction of the points on the current curve are predicted correctly. Proof: We use the following tail bound: Let t > 4 be an even integer, and X_1,…,X_n be t-wise independent R.V.s with values in [0,1]. Let X = ΣX_i, μ = E[X], and A > 0. Then: Pr[|X − μ| ≥ A] ≤ 8·((tμ + t²)/A²)^(t/2).

42 Main Lemma Proof Cont. According to the next bit predictor, the probability of successful prediction is at least 1/(2√l). In the i'th iteration we make q predictions (as many points as there are on the curve). Using the tail bound provides the result. Q.E.D (of the claim). Main Lemma Proof (cont.): Therefore, w.h.p. there are at least q/(4√l) evaluation points of P_i that agree with the degree-2hr polynomial on the i'th curve (out of a total of at most lq).

43 Main Lemma Proof Cont. A list decoding bound: given n distinct pairs (x_i, y_i) in a field F and parameters k and d, with k > (2dn)^(1/2), there are at most 2n/k degree-d polynomials g such that g(x_i) = y_i for at least k pairs. Furthermore, a list of all such polynomials can be computed in time poly(n, log|F|). Using this bound and the previous claim, at most 8l^(3/2) degree-2rh polynomials agree on this number of points (q/4√l).
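The quoted bound is easy to sanity-check by exhaustive search in a toy setting (GF(11), degree d = 1, and n = 10 pairs lying mostly on an invented line; parameters chosen only so that k > (2dn)^(1/2)):

```python
from itertools import product

P, d, n, k = 11, 1, 10, 5
# n pairs: mostly on the planted line y = 3x + 2, plus two corrupted pairs.
pairs = [(x, (3 * x + 2) % P) for x in range(8)] + [(8, 0), (9, 1)]

agreeing = []
for coeffs in product(range(P), repeat=d + 1):   # all degree-<=d polynomials
    agree = sum(1 for x, y in pairs
                if sum(c * x ** i for i, c in enumerate(coeffs)) % P == y)
    if agree >= k:
        agreeing.append(coeffs)

assert k > (2 * d * n) ** 0.5      # the bound's precondition
assert len(agreeing) <= 2 * n / k  # at most 2n/k = 4 such polynomials
print(agreeing)                    # [(2, 3)], i.e. only the planted y = 3x + 2
```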

44 Lemma Proof Cont. Now, P_i intersects P_{i−1} at r random positions, and we know the evaluation of x̂ at the points in P_{i−1}. Two degree-2rh polynomials can agree on at most a 2rh/q fraction of their points, so the probability that an "incorrect" polynomial among our candidates agrees on all r random intersection points is at most 8l^(3/2)·(2rh/q)^r.

45 Main Lemma Proof Cont. So, with probability at least 1 − 1/(4q^d) we learn the points of P_i successfully. After 2q^d prediction steps, we have learned x̂ on F^d \ {0} (since A is a generator of F^d \ {0}). By the union bound, the probability that every step of the reconstruction is successful is at least ½. Q.E.D (main lemma)

46 Proof of Main Theorem Cont. First, assume by way of contradiction a next-element predictor f for E. By an averaging argument, there must be a fixing of the coins of R such that: Pr_{x~X}[∃z R(z) = x] ≥ ½.

47 Using N.B.P. – Take 2 [Diagram: as before.] Use the N.B.P. over all points in F, so that we get enough "good evaluations".

48 Proof of Main Theorem Cont. According to the counting argument, this implies that: k ≤ a + 1 = O(mhd log q). Recall that r = Θ(d). A contradiction to the parameter choice of the main theorem. Q.E.D (main theorem)!

49 From q-ary extractors to (regular) extractors The simple technique - using error correcting codes: Lemma: Let F be a field with q elements. Let C: {0,1}^(log q) → {0,1}^n be a binary error correcting code with distance at least 0.5 − O(ε²). If E: {0,1}^n × {0,1}^t → F^m is a (k, O(ε)) q-ary extractor, then E': {0,1}^n × {0,1}^(t+log n) → {0,1}^m defined by: E'(x; (y,j)) = C(E(x;y)_1)_j ∘ … ∘ C(E(x;y)_m)_j is a (k, εm) binary extractor.
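A toy sketch of the bit-selection step E'(x;(y,j)) = C(E(x;y)_1)_j ∘ …: here the Hadamard code over 3-bit symbols stands in for the unspecified code C, and the q-ary output symbols are invented:

```python
q = 8   # toy field symbols 0..7, so log q = 3 bits per symbol

def hadamard(sym):
    """Hadamard encoding of a 3-bit symbol: all GF(2) inner products."""
    bits = [(sym >> i) & 1 for i in range(3)]
    return [sum(b * ((mask >> i) & 1) for i, b in enumerate(bits)) % 2
            for mask in range(8)]        # codeword length 8

def binarize(symbols, j):
    """E'(x; (y, j)): take the j-th bit of the encoding of each symbol."""
    return [hadamard(s)[j] for s in symbols]

qary_output = [3, 7, 0, 5]               # invented stand-in for E(x; y) in F^m
print(binarize(qary_output, 5))          # -> [1, 0, 0, 0]
```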

50 From q-ary extractors to (regular) extractors A more complex transformation from q-ary extractors to binary extractors achieves the following parameters: Thm: Let F be a field with q < 2^m elements. There is a polynomial time computable function B such that for any (k,ε) q-ary extractor E, E'(x; (y,j)) = B(E(x;y), j) is a (k, ε·log*m) binary extractor.

51 From q-ary extractors to (regular) extractors The last theorem allows using theorem 1 with ε = O(ε'/log*m), and implies a (k, ε') extractor with seed length t = O(log n) and output length m = k/(log n)^O(1).

52 Extractor ⇒ PRG Identify: the string x ∈ {0,1}^n with the function x: {0,1}^(log n) → {0,1} by setting x(i) = x_i. Denote by S(x) the size of the smallest circuit computing the function x. Def (PRG): an ε-PRG for size s is a function G: {0,1}^t → {0,1}^m with the following property: ∀ 1 ≤ i ≤ m and all functions f: {0,1}^(i−1) → {0,1} with size-s circuits, Pr[f(G(U_t)_{1…i−1}) = G(U_t)_i] ≤ ½ + ε/m. This implies: for all size s − O(1) circuits C, |Pr[C(G(U_t)) = 1] − Pr[C(U_m) = 1]| ≤ ε.
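The next-bit-predictor definition can be exercised on a deliberately bad toy "PRG" by enumerating every predictor as a truth table (circuit-size limits are ignored at this scale; the generator G is an invented example):

```python
from itertools import product

def G(seed):      # 2-bit seed -> 3 output bits: the seed plus its parity
    return seed + ((seed[0] + seed[1]) % 2,)

outputs = [G(s) for s in product((0, 1), repeat=2)]
m = 3

best = 0.0
for i in range(1, m + 1):
    # every predictor of bit i = a truth table over the 2^(i-1) prefixes
    for table in product((0, 1), repeat=2 ** (i - 1)):
        def f(pre):
            return table[sum(b << k for k, b in enumerate(pre))]
        win = sum(f(o[:i - 1]) == o[i - 1] for o in outputs) / len(outputs)
        best = max(best, win)
print(best)   # 1.0: the third bit is determined by the first two,
              # so this G badly fails the next-bit-predictor test
```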

53 q-ary PRG Def (q-ary PRG): Let F be the field with q elements. An ε-q-ary PRG for size s is a function G: {0,1}^t → F^m with the following property: ∀ 1 ≤ i ≤ m and all functions f: F^(i−1) → F^(ε⁻²) with size-s circuits, Pr[∃j: f(G(U_t)_{1…i−1})_j = G(U_t)_i] ≤ ε. Fact: an O(ε)-q-ary PRG for size s can be transformed into a (regular) mε-PRG for size not much smaller than s.

54 The Construction Plan for building a PRG G_x: {0,1}^t → {0,1}^m: use a hard function x: {0,1}^(log n) → {0,1}; let x̂ be the low-degree extension of x; obtain l "candidate" PRGs, where l = d(log q / log m), as follows: For 0 ≤ j < l define G_x^(j): {0,1}^(d log q) → F^m by G_x^(j)(v) = x̂(A^(1·m^j) v) ∘ x̂(A^(2·m^j) v) ∘ … ∘ x̂(A^(m·m^j) v), where A is a generator of F^d \ {0}. Note: G_x^(j) corresponds to using our q-ary extractor construction with the "successor function" A^(m^j). We show: x is hard ⇒ at least one G_x^(j) is a q-ary PRG.

55 Getting into Details Let F' be a subfield of F of size h. Lemma: there exist invertible d×d matrices A and A' with entries from F which satisfy: ∀v ∈ F^d s.t. v ≠ 0, {A^i v}_i = F^d \ {0}; ∀v ∈ F'^d s.t. v ≠ 0, {A'^i v}_i = F'^d \ {0}; A' = A^p for p = (q^d − 1)/(h^d − 1); A and A' can be found in time q^O(d). (Think of F^d as both a vector space and the extension field of F; note that F'^d is a subset of F^d.) Proof: There exists a natural correspondence between F^d and GF(q^d), and between F'^d and GF(h^d). The multiplicative group of GF(q^d) is cyclic of order q^d − 1, i.e. there exists a generator g; g^p generates the unique subgroup of order h^d − 1, the multiplicative group of GF(h^d). A and A' are the linear transforms corresponding to g and g^p respectively.

56 Require h^d > n. Define x̂ as follows: x̂(A'^i 1) = x(i), where 1 is the all-ones vector (low degree extension). Recall: For 0 ≤ j < l define G_x^(j): {0,1}^(d log q) → F^m by G_x^(j)(v) = x̂(A^(1·m^j) v) ∘ x̂(A^(2·m^j) v) ∘ … ∘ x̂(A^(m·m^j) v). Theorem (PRG main): for every n, d, and h satisfying h^d > n, at least one of the G_x^(j) is an ε-q-ary PRG for size Ω(ε⁻⁴ h d² log² q). Furthermore, all the G_x^(j)'s are computable in time poly(q^d, n) with oracle access to x. Since h^d > n, there are enough "slots" to embed all of x in a d-dimensional cube of size h^d, and since A' generates F'^d \ {0}, x is indeed embedded in a d-dimensional cube of size h^d. Note: h denotes the degree in individual variables, and the total degree is at most hd. The computation of x̂ from x can be done in poly(n, q^d) = q^O(d) time.

59 Extension Field Def: if F is a subfield of E, then we say that E is an extension field of F. Lemma: let E be an extension field of F, let f(x) be a polynomial over F (i.e. f(x) ∈ F[X]), and let c ∈ E; then f(x) ↦ f(c) is a homomorphism of F[X] into E.

60 Construction of the Galois Field GF(q^d) Thm: let p(x) be irreducible in F[X]; then there exists E, an extension field of F, in which there exists a root of p(x). Proof Sketch: add α (a new element) to F; α is to be a root of p(x). In F[α] (polynomials in the variable α), compute modulo p(α); the result is a field extending F in which α is a root of p(x).

61 Example: F = the reals, p(x) = x² + 1; adjoining a root of p(x) to the reals yields the complex numbers.